
Pitting and non-pitting oedema

The distinction is essential to determine aetiology and treatment

Oedema can be divided into two types: pitting and non-pitting. These types are relatively easy to distinguish clinically and the distinction is essential to determine aetiology and treatment.

Oedema is the swelling of soft tissue due to fluid accumulation. Pitting is demonstrated when pressure is applied to the oedematous area and an indentation remains in the soft tissue after the pressure is removed (Box 1 and Box 2). Mild pitting oedema is best identified by applying pressure over a bony prominence. Non-pitting oedema refers to the lack of persistent indentation in the oedematous soft tissue when pressure is removed.1

In addition to the differentiation of pitting and non-pitting oedema, the pattern of distribution is reflective of the underlying aetiology. With pitting oedema, there may be bilateral dependent oedema of the lower limbs, generalised oedema or localised oedema. Non-pitting oedema generally affects an isolated area, such as a limb. There are two ways of describing the severity of pitting oedema. Most commonly, in the setting of peripheral oedema, severity is graded by its proximal extent, so that oedema located above the knee is more severe than oedema presenting below the knee. The alternative approach is based on depth and duration of pitting after the release of pressure (Box 3).2

A number of factors3 can be considered to be major contributors to the development of oedema:

  • increased intravascular hydrostatic pressure;

  • reduced intravascular oncotic pressure;

  • increased blood vessel wall permeability;

  • obstructed fluid clearance in the lymphatic system; and

  • increased tissue oncotic pressure.

The underlying pathophysiology for oedema explains the reason for pitting and non-pitting. Oedema is the accumulation of fluid in the interstitium. In normal circumstances, there is a balance between fluid leaking from capillaries and drainage by the lymphatics.3 In the setting of increased intravascular hydrostatic pressure, reduced oncotic pressure or where there is increased vessel wall permeability, fluid leaks out of vessels into the interstitial space. When external pressure is applied, extracellular fluid is displaced with increased drainage through the lymphatic system, creating an indentation that is visible in the skin and is described as pitting. When pressure is removed, the fluid slowly returns and the indentation disappears (see the video at mja.com.au). Lymphoedema, which is the most common form of non-pitting oedema, occurs when fluid accumulates in the interstitial space as a result of a reduction in lymphatic drainage. The application of pressure does not result in an indentation as there is an inability to drain fluid through the damaged lymphatic system.4 An uncommon form of non-pitting oedema, myxoedema, can occur as a result of accumulation of hydrophilic molecules in the subcutaneous tissue.5

The differentiation between pitting and non-pitting oedema, in addition to the pattern of distribution, reflects different pathophysiology and may therefore be helpful in identifying the underlying aetiology. Pitting peripheral lower limb oedema resulting from raised hydrostatic pressure can occur in congestive cardiac failure, venous insufficiency and as a result of the use of a calcium channel blocker. Generalised oedema may be seen in kidney disease, where reduced intravascular oncotic pressure occurs through protein loss or where increased vascular volume occurs through sodium and fluid retention. Localised pitting oedema is likely related to a local inflammatory process resulting in increased vessel wall permeability. Non-pitting oedema of an isolated area is consistent with failure of lymphatic drainage, which may rarely have a primary (hypoplastic) aetiology or, more commonly, a secondary (obstructive) aetiology. Secondary causes include external compression from a tumour, involvement of lymph nodes in metastatic disease, and lymph vessel damage following radiotherapy or following lymph node resection. Myxoedema is associated with thyroid disease.5

The underlying aetiology will guide treatment options. In the setting of congestive cardiac failure, treatment would generally include fluid restriction and diuretics (including spironolactone). Compression bandaging and leg elevation are useful for oedema related to venous insufficiency. In the setting of oedema related to inflammation, a general approach would include applying ice and elevation in addition to treating the underlying cause of the inflammatory process. Lymphoedema may be detected using a perometer and managed with compression garments and manual lymph drainage. Myxoedema may resolve with treatment of the underlying thyroid condition.

Box 1 –
Pressure applied to an oedematous area to demonstrate pitting

Box 2 –
Indentation remaining in soft tissue after pressure to an oedematous area is removed

Box 3 –
Alternative approach for grading the severity of pitting oedema based on depth and duration of pitting after release of pressure

Severity | Description
Grade 1+ | A pit of up to 2 mm that disappears immediately
Grade 2+ | A pit of 2–4 mm that disappears in 10–15 seconds
Grade 3+ | A pit of 4–6 mm that may last more than 1 minute
Grade 4+ | A deep pit greater than 6 mm that may last as long as 2–5 minutes

Adapted from Guelph General Hospital Congestive Heart Failure Pathway.2

Osteoporosis treatment: a missed opportunity

Minimal trauma fractures remain a major cause of morbidity in Australia, affecting one in two women and one in four men over the age of 60 years.1 Mortality is increased after all minimal trauma fractures, even after minor fractures.2 Hip fractures are particularly devastating, leading to decreased quality of life, increased mortality and loss of functional independence.3

Defining osteoporosis

Bone mineral density (BMD) is expressed in relation to either “young normal” adults of the same sex (T score) or to the expected BMD for the patient’s age and sex (Z score). Osteoporosis is defined as a BMD 2.5 SDs or more below that of a “young normal” adult (T score ≤ −2.5), with fracture risk increasing twofold to threefold for each SD decrease in BMD.4,5 A BMD Z score less than −2 indicates that BMD is below the normal range for age and sex, and warrants a more intensive search for secondary causes. Importantly, osteoporosis is also diagnosed after a minimal trauma fracture, irrespective of the patient’s T score.
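
For readers who want the arithmetic spelled out, the short Python sketch below shows how a measured BMD value is converted into T and Z scores and how a twofold-to-threefold risk ratio per SD compounds. The reference means and SDs used here are illustrative placeholders, not real densitometry normative data.

```python
# Illustrative only: the reference means and SDs below are placeholder values,
# not real densitometry normative data.

def t_score(bmd, young_normal_mean, young_normal_sd):
    """SDs from the mean BMD of a 'young normal' adult of the same sex."""
    return (bmd - young_normal_mean) / young_normal_sd

def z_score(bmd, age_sex_matched_mean, age_sex_matched_sd):
    """SDs from the expected BMD for the patient's age and sex."""
    return (bmd - age_sex_matched_mean) / age_sex_matched_sd

def relative_fracture_risk(t, risk_ratio_per_sd=2.0):
    """Fracture risk relative to T = 0, assuming a constant ratio per SD
    decrease (the text quotes twofold to threefold per SD)."""
    return risk_ratio_per_sd ** max(0.0, -t)

# Hypothetical femoral neck measurement (g/cm^2)
bmd = 0.62
t = t_score(bmd, young_normal_mean=0.85, young_normal_sd=0.10)        # -2.3
z = z_score(bmd, age_sex_matched_mean=0.70, age_sex_matched_sd=0.10)  # -0.8
print(f"T = {t:.1f}, Z = {z:.1f}, "
      f"relative fracture risk ~ {relative_fracture_risk(t):.1f}x")
```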

Absolute fracture risk

Treatment for osteoporosis is recommended for patients with a high absolute fracture risk. This includes older Australians (post-menopausal women and men aged over 60 years) with T scores ≤ −2.5 at the lumbar spine, femoral neck or total hip, and patients with a history of a minimal trauma fracture.6 There is a major gap between evidence and treatment in secondary fracture prevention, with fewer than 20% of patients presenting with a minimal trauma fracture being treated or investigated for osteoporosis.7,8 However, it is important that patients with a low fracture risk, including younger women without clinical risk factors and T scores ≤ −2.5 at “non-main-sites” (eg, lateral lumbar spine or Ward’s triangle in the hip) are not treated.9 Absolute fracture risk calculators incorporate osteoporosis risk factors with BMD to stratify fracture probability.10 It is therefore important for clinicians to assess absolute fracture risk. Two of several absolute fracture risk calculators are commonly used to aid clinicians in this regard: the Garvan Fracture Risk Calculator11 and the Fracture Risk Assessment Tool (FRAX) developed by the World Health Organization.

The Garvan Fracture Risk Calculator estimates absolute fracture risk over 5 and 10 years (http://www.garvan.org.au/bone-fracture-risk/). It may be used in men and women aged over 50 years, and incorporates age, sex, BMD at the spine or femoral neck, falls and fracture history. A potential limitation of this tool is that it does not include other clinical risk factors. The country-specific FRAX tool calculates the 10-year probability of hip fracture and major osteoporotic fracture in patients aged 40–90 years. It incorporates femoral neck BMD with ten clinical risk factors. Limitations include underestimation of fracture risk in patients with multiple minimal trauma fractures, an inability to adjust the risk for dose-dependent exposure, a lack of validation for use with BMD of the spine, and exclusion of falls.

Role of fracture risk calculators in 2016

The role of absolute fracture risk calculators in clinical practice is evolving. In addition to their individual limitations, there is a lack of evidence that their use leads to effective targeting of drug therapy to those deemed to be at high risk of fracture,12 and prospective studies are needed. In particular, country-specific intervention thresholds based on absolute fracture risk need to be validated clinically. However, fracture risk calculators are useful for identifying patients with low fracture risk who do not require treatment.

Special patient groups

Limited evidence-based guidance is available for treating osteoporosis in several groups, including patients with post-transplantation osteoporosis, type 1 diabetes mellitus, chronic kidney disease (creatinine clearance < 30 mL/minute), neurological, respiratory and haematological diseases, and young adults and pregnant women. Such patients require individualised management.

Osteoporosis prevention using non-pharmacological therapies

Lifestyle approaches (adequate dietary calcium intake, optimal vitamin D status, participation in resistance exercise, smoking cessation, avoidance of excessive alcohol, falls prevention) act as a framework for improving musculoskeletal health at a population-based level.6,13–16

Calcium and vitamin D

The current Australian recommended daily intake (RDI) of calcium is 1300 mg per day for women aged over 51 years, 1000 mg per day for men aged 51–70 years and 1300 mg per day for men aged over 70 years.17 Adverse effects of calcium supplementation include gastrointestinal bloating, constipation,18 and renal calculi.19 There is controversy about the efficacy of calcium in preventing osteoporotic fractures.6,19,20 Further work is required with studies powered to investigate cardiac outcomes in men and women receiving calcium supplementation to meet current RDIs. Higher dietary calcium intake is also associated with reductions in mortality, cardiovascular events and strokes.21 Dietary sources of calcium are the preferred sources. Calcium supplementation should be limited to 500–600 mg per day, and used only by those who cannot achieve the RDI with dietary calcium.15

The main source of vitamin D is through exposure to sunlight. Institutionalised or housebound older people are at particularly high risk of vitamin D deficiency. Inadequate vitamin D status is defined as a serum 25-hydroxyvitamin D (25(OH)D) level < 50 nmol/L in late winter/early spring; in older individuals such inadequate vitamin D levels are associated with muscular weakness and decreased physical performance.22 Increased falls and fractures occur at 25(OH)D levels < 25–30 nmol/L.23,24 Adults aged 50–70 years require at least 600 IU of vitamin D3 daily, and those over 70 years at least 800 IU, with larger daily doses required to treat vitamin D deficiency.25

Exercise

Community-based multimodal exercise programs that incorporate high-speed power training increase BMD and muscle strength, with a trend towards falls reduction.26 Thus, exercise is recommended both to maintain bone health and to reduce falls. It should be individualised to the patient’s needs and abilities, increasing progressively as tolerated by the degree of osteoporosis-related disability.

Falls prevention

Falls are the precipitating factor in nearly 90% of all appendicular fractures, including hip fractures,3 and reducing falls risk is critical in managing osteoporosis. Reducing the use of benzodiazepines, neuroleptic agents and antidepressants reduces the risk of falls,27 and, among women aged 75 or more years, muscle strengthening and balance exercises reduce the risk of both falls and injuries.28

Antiresorptive therapy for osteoporosis

Post-menopausal osteoporosis results from an imbalance in bone remodelling, such that bone resorption exceeds bone formation. Antiresorptive drugs decrease the number, activity and lifespan of osteoclasts,29 preserving or increasing bone mass with a resulting reduction in vertebral, non-vertebral and hip fractures. These drugs include bisphosphonates (oral or intravenous),30–35 oestrogen36,37 and selective oestrogen receptor-modulating drugs,38 strontium ranelate and denosumab, a human monoclonal antibody against receptor activator of nuclear factor κB-ligand (RANKL).39

Antiresorptive treatments for osteoporosis are approved for reimbursement on the Pharmaceutical Benefits Scheme (PBS) for men and post-menopausal women following a minimal trauma fracture, as well as for those at high risk of fracture, on the basis of age (> 70 years) and low BMD (T score < −2.5 or −3.0). Bisphosphonates are also approved for premenopausal women who have had a minimal trauma fracture. In patients at high risk of fracture, osteoporosis therapy reduces the risk of vertebral fractures by 40–70%, non-vertebral fractures by about 25%, and hip fractures by 40–50%.30–40

Bisphosphonates

Mechanism of action and efficacy. Bisphosphonates are stable analogues of pyrophosphate. They bind avidly to hydroxyapatite crystals on bone and are then released slowly at sites of active bone remodelling in the skeleton, leading to recirculation of bisphosphonates. The terminal half-lives of bisphosphonates differ; for alendronate it is more than 10 years,41 while for risedronate it is about 3 months.42

Alendronate prevents minimal trauma fractures. Therapy with alendronate reduces vertebral fracture risk by 48% compared with placebo. Similar reductions in the risk of hip and wrist fractures were seen in women treated with alendronate who had low BMD and prevalent vertebral fractures.33,34,43 A randomised, double-blind, placebo-controlled trial of post-menopausal women assigned to risedronate therapy or placebo for 3 years showed vertebral and non-vertebral fracture risks were respectively reduced by 41% and 39% by risedronate.35 Three years of treatment with zoledronic acid in women with post-menopausal osteoporosis reduced the risk of morphometric vertebral fracture by 70% compared with placebo, and reduced the risk of non-vertebral and hip fracture by 25% and 41% respectively.30

Adverse effects. The main potential adverse effects of oral bisphosphonates are gastrointestinal (including reflux, oesophagitis, gastritis and diarrhoea). Oral bisphosphonates should not be given to patients with active upper gastrointestinal disease, dysphagia or achalasia. Intravenous bisphosphonates are associated with an acute phase reaction (fever, flu-like symptoms, myalgias, headache and arthralgia) in about a third of patients, typically within 24–72 hours of receiving their first infusion of zoledronic acid, but this is reduced significantly with subsequent infusions.30 Treatment with antipyretic agents, including paracetamol, improves these symptoms. Treatment with bisphosphonates may also lower serum calcium concentrations, but this is uncommon in the absence of vitamin D deficiency.44,45 Bisphosphonates are not recommended for use in patients with creatinine clearance below 30–35 mL/min.

Less common adverse effects associated with long term bisphosphonate therapy include osteonecrosis of the jaw (ONJ) and atypical femoral fracture (AFF). Overemphasis of these uncommon adverse effects by patients has led to declining osteoporosis treatment rates.46

Jaw osteonecrosis. ONJ is said to occur when there is an area of exposed bone in the maxillofacial region that does not heal within 8 weeks after being identified by a health care provider, in a patient who was receiving or had been exposed to a bisphosphonate and did not have radiation therapy to the craniofacial region.47 Risk factors for ONJ include intravenous bisphosphonate therapy for malignancy, chemotherapeutic agents, duration of exposure to bisphosphonates, dental extractions, dental implants, poorly fitting dentures, glucocorticoid therapy, smoking, diabetes and periodontal disease.48,49 The risk of ONJ is about 1 in 10 000 to 1 in 100 000 patient-years in patients taking oral bisphosphonates for osteoporosis.47 Given the prolonged half-life of bisphosphonates, temporary withdrawal of treatment before extractions is unlikely to have a significant benefit and is therefore not recommended.50

Atypical femur fractures. Clinical trial data clearly support the beneficial effect of bisphosphonates in preventing minimal trauma fractures. However, oversuppression of bone remodelling may allow microdamage to accumulate, leading to increased bone fragility.51 Cases of AFF and severely suppressed bone remodelling after prolonged bisphosphonate therapy52 have prompted further research and recent guideline development.53 However, this finding is not universal. AFFs occur in the subtrochanteric region or diaphysis of the femur and have unique radiological features, including a predominantly transverse fracture line, periosteal callus formation and minimal comminution, as shown in Box 1.53 AFFs have been reported in patients taking bisphosphonates and denosumab, but about 7% of cases occur without exposure to either drug. AFFs appear to be more common in patients who have been exposed to long term bisphosphonate therapy, with a higher risk (113 per 100 000 person-years) in patients who receive more than 7–8 years of therapy.53 Although many research questions remain unanswered, including aetiology, optimal screening and management of these fractures, the risk of a subsequent AFF is reduced from 12 months after cessation of bisphosphonate treatment.

Duration of therapy. Concerns about the small but increased risk of adverse events after long term treatment with bisphosphonates (Box 2) have led to the development of guidelines on the optimal duration of therapy.54 For patients at high risk of fracture, bisphosphonate treatment for up to 10 years (oral) or 6 years (intravenous) is recommended. For women who are not at high risk of fracture after 3 years of intravenous or 5 years of oral bisphosphonate treatment, a drug holiday of 2–3 years may be considered (Box 3). However, it is critical to understand that “holiday” does not mean “retirement”, and those patients should continue to have BMD monitoring after 2–3 years.

Hormone replacement therapy

Hormone replacement therapy (HRT) is effective in preventing and treating post-menopausal osteoporosis. Benefits need to be balanced against thromboembolic and vascular risk, breast cancer risk (for oestrogen plus progesterone), and duration of therapy. HRT is most suitable for recently menopausal women (up until age 59 years), particularly for those with menopausal symptoms. In women with an early or premature menopause, HRT should be continued until the average age of menopause onset (about 51 years), or longer in the setting of a low BMD. Oral or transdermal oestrogen therapy (in women who have had a hysterectomy) and combined oestrogen and progesterone therapy preserve BMD,55 and were also shown to reduce the risk of hip, vertebral and total fractures compared with placebo in the Women’s Health Initiative (WHI).37,56

In the initial WHI analysis, combined oral oestrogen and progesterone therapy for 5.6 years in post-menopausal women aged 50–79 years (who were generally older than women who used HRT for control of menopausal symptoms), many of whom had cardiovascular risk factors, was shown to increase the risk of breast cancer, stroke and thromboembolic events.57 However, subsequent reanalysis of WHI data has established the efficacy and safety of HRT in younger women up until 10 years after menopause, or the age of 59 years, when the benefits of treatment outweigh the risks. In women with a history of hysterectomy, oral oestrogen therapy alone has a better benefit–risk profile, with no increases in rates of breast cancer or coronary heart disease.56

Women commencing HRT should be fully informed about its benefits and risks. Cardiovascular risk is not increased when therapy is initiated within 10 years of menopause,58,59 but the risk of stroke is elevated regardless of time since menopause. It is also recommended that doctors discuss smoking cessation, blood pressure control and treatment of dyslipidaemia with women commencing HRT.

Selective oestrogen receptor modulator (SERM) drugs

The SERM raloxifene has beneficial oestrogen-like effects on bone, but has oestrogen antagonist activity on breast and endometrium. Treatment with raloxifene for 3 years reduced vertebral fractures by 30–50% compared with placebo in post-menopausal women.38 However, there was no reduction in non-vertebral fractures. Consequently, raloxifene is useful in post-menopausal women with spinal osteoporosis, particularly those with an increased risk of breast cancer. Raloxifene therapy is also associated with a 72% reduction in the risk of invasive breast cancer.60 Raloxifene may exacerbate hot flushes, and women receiving raloxifene have a greater than threefold increased incidence of thromboembolic disease, comparable with those receiving HRT.36,56 Raloxifene therapy is also associated with an increased risk of stroke,61 particularly in current smokers.

Denosumab

Denosumab is a human monoclonal antibody with specificity for RANKL, which stimulates the development and activity of osteoclasts. Denosumab mimics the endogenous inhibitor of RANKL, osteoprotegerin, and is given as a 60 mg subcutaneous injection once every 6 months. Denosumab reduces new clinical vertebral fractures by 68%, with a 40% reduction in hip fracture and a 20% reduction in non-vertebral fractures compared with placebo over 3 years.39,62

The adverse effects of denosumab include small increases in the risks of eczema, cellulitis and flatulence.39 Hypocalcaemia, particularly in patients with abnormal renal function, has also been reported,63 and denosumab is contraindicated in patients with hypocalcaemia. Jaw osteonecrosis has been reported in patients receiving denosumab for osteoporosis, as have AFFs.64,65

Strontium ranelate

Strontium ranelate increases bone formation markers and reduces bone resorption markers, but is predominantly antiresorptive, as increases in the rate of bone formation have not been demonstrated.66 Strontium ranelate significantly reduces the risk of vertebral and non-vertebral fractures.67–69 The most frequent adverse effects associated with strontium ranelate are nausea, diarrhoea, headache, dermatitis and eczema.67,68 Cases of a rare hypersensitivity syndrome (drug reaction, eosinophilia and systemic symptoms [DRESS]) have been reported, and strontium ranelate should be discontinued if a rash develops. Strontium ranelate treatment was associated with an increased incidence of venous thromboembolism70 and, more recently, with a small increase in absolute risk of acute myocardial infarction. Strontium ranelate is contraindicated in patients with uncontrolled hypertension and/or a current or past history of ischaemic heart disease, peripheral arterial disease and/or cerebrovascular disease.71 This drug is now a second-line treatment for osteoporosis, only used when other medications for osteoporosis are unsuitable, in the absence of contraindications.

Anabolic therapy for osteoporosis

Teriparatide

Teriparatide increases osteoblast recruitment and activity to stimulate bone formation.40 In contrast to antiresorptive agents, which preserve bone microarchitecture and inhibit bone loss, teriparatide (recombinant human parathyroid hormone [1–34]) stimulates new bone formation and improves bone microarchitecture. Teriparatide reduced the risk of new vertebral fractures by 65% in women with osteoporosis who have had one or more baseline fractures40 and also reduced new or worsening back pain. Non-vertebral fractures are also reduced by 53% by teriparatide, but studies have been underpowered to detect reductions in the rate of hip fracture. Side effects include headache (8%), nausea (8%), dizziness and injection-site reactions. Transient hypercalcaemia (serum calcium level, > 2.60 mmol/L) after dosing also occurred in 3–11% of patients receiving teriparatide.

Teriparatide has a black box warning concerning an increased incidence of osteosarcoma in rats that were exposed to 3 and 60 times the normal human exposure over a significant portion of their lives. Teriparatide is therefore contraindicated in patients who may be at increased risk of osteosarcoma, including those with a prior history of skeletal irradiation, Paget’s disease of bone, an unexplained elevation in bone-specific alkaline phosphatase, bone disorders other than osteoporosis, and in adolescents and children.

In Australia, the maximum lifetime duration of teriparatide therapy is 18 months. However, the antifracture benefit increases the longer the patient remains on treatment, with non-vertebral fractures being reduced for up to 2 years of treatment compared with the first 6 months of treatment, and for up to 2 years following cessation of treatment.72 In addition, increases in the rates of trabecular and cortical bone formation continue for up to 2 years of treatment, refuting the outmoded concept of a limited “anabolic window” of action for this drug.73 Importantly, following teriparatide therapy, the accrued benefits will be lost if antiresorptive therapy is not immediately instituted. Teriparatide reimbursement through the PBS is restricted to patients who have had two minimal trauma fractures and who have a fracture after at least a year of antiresorptive therapy, and who have a BMD T score below −3. However, the rate of teriparatide use in Australia is among the lowest in the world (David Kendler, University of British Columbia, Canada, personal communication).

Future directions

Three new anti-osteoporosis drugs are in clinical development.

“Selective” antiresorptive drugs

A novel “selective” antiresorptive drug, odanacatib, is a cathepsin K inhibitor that has the advantage of not suppressing bone formation, as do traditional or “non-selective” antiresorptive drugs. Clinical trial data in the largest ever osteoporosis trial, published in abstract form, show that odanacatib, given as a weekly tablet, reduces vertebral, non-vertebral and hip fractures with risk reductions similar to those seen with bisphosphonates. Adverse events were reported and include atypical femur fractures, morphea and adjudicated cerebrovascular events.74 The benefit–risk profile of this drug is currently being clarified.

Anabolic drugs

The two other new drugs are anabolic agents. Abaloparatide, an analogue of parathyroid hormone-related protein (1–34), selectively acts on the type 1 parathyroid hormone receptor to stimulate bone formation. It is given as a daily injection.75 It reduces vertebral and non-vertebral fractures, but data for hip fracture are lacking.76 Abaloparatide reduced major osteoporotic fractures by 67% compared with placebo.77 Abaloparatide will also have a black box warning about osteogenic sarcoma in rats. The final drug, romosozumab, is a monoclonal antibody that targets an inhibitor of bone formation, sclerostin, and is given as 2-monthly injections for 12 months. Trial data comparing reductions in fractures with placebo are awaited, and a head-to-head trial comparing the antifracture efficacy of romosozumab with alendronate is ongoing.

Conclusion

Osteoporosis treatment represents a missed opportunity for medical practitioners. Despite a growing number of effective therapies, where the benefits far outweigh the risks, only a minority of patients presenting to the health care system with minimal trauma fractures are being either investigated or treated for osteoporosis.

Closing this gap between evidence and treatment is long overdue and will require systems-based approaches supported by both the federal and state governments. One such approach is fracture liaison services, which have proven efficacy in cost-effectively reducing the burden of fractures caused by osteoporosis, and are increasingly being implemented internationally. General practitioners also need to take up the challenge imposed by osteoporosis and become the champions of change, working with the support of specialists and government to reduce the burden of fractures caused by osteoporosis in Australia.

Box 1 –
Bilateral atypical femoral fractures in an older woman after bisphosphonate therapy for 9 years*


* Note the characteristic findings of a predominantly transverse fracture line, periosteal callus formation and minimal comminution on the left, and the periosteal reaction on the lateral cortex on the right femur, indicating an early stress fracture.

Box 2 –
Balancing benefits and risks of bisphosphonate therapy with other lifetime risks*


* Adapted from Adler, et al.54

Box 3 –
Approach to the management of post-menopausal women on long term bisphosphonate therapy for osteoporosis*


DXA = dual-energy x-ray absorptiometry. * Adapted from Adler, et al.54 † Includes age > 70 years; clinical risk factors for fracture and osteoporosis; fracture risk score on fracture risk calculation tools above the Australian treatment threshold. ‡ Cessation of treatment for 2–3 years.

Low HIV testing rates among people with a sexually transmissible infection diagnosis in remote Aboriginal communities

The known  Sexually transmissible infection (STI) guidelines recommend full STI screening, including testing for HIV and syphilis, for people diagnosed with any STI.

The new  Analysis of clinical data for 2010–2014 from 65 remote Aboriginal communities indicated that about one-third of people with positive test results for chlamydia, gonorrhoea or trichomoniasis were tested for HIV within 30 days of the STI test, as were about one-half of those tested for syphilis.

The implications  Adhering to HIV and syphilis screening recommendations is clearly an area for improvement in the delivery of sexual health services to remote communities.

A significant challenge in Aboriginal and Torres Strait Islander health is averting a major outbreak of human immunodeficiency virus infection (HIV) as has occurred in indigenous populations in other countries.1,2 Although the number of HIV diagnoses among Aboriginal people has been relatively stable over the past 20 years, there are now early warning signs of an increase. The number of cases is small, but standardised population rates of HIV diagnoses in Indigenous and non-Indigenous Australians have diverged over the past 5 years: the population rate is now almost twice as high for Aboriginal as for Australian-born non-Indigenous people (5.9 per 100 000 v 3.7 per 100 000 population). In addition, there are differences in the way HIV is transmitted in the two populations: a higher proportion of infections among Aboriginal people are attributed to injecting drug use (16% v 3%) or heterosexual sex (20% v 13%), and the proportion of female patients is higher (22% v 5%) than among non-Indigenous Australians.3

There are several risk factors for HIV, including social, psychological and individual aspects. However, of particular significance for the Aboriginal population are the higher endemic rates of sexually transmissible infections (STIs) such as chlamydia, gonorrhoea and trichomoniasis,4,5 as well as an ongoing outbreak of syphilis (almost 1000 cases across northern and remote Australia),6 all of which increase the risk of HIV transmission.7

One of the critical factors for preventing HIV in any population is timely, targeted and appropriate testing. In Australia, HIV and STI care guidelines include specific recommendations for Aboriginal and Torres Strait Islander populations,8,9 including testing for an undiagnosed HIV infection when another STI is diagnosed. Timely testing and diagnosis can prevent the spread of HIV, as people may reduce their sexual risk behaviour once they are aware of their positive status.10 Further, early detection can facilitate early treatment, and the risk of transmission remains extremely low if individuals can sustain an undetectable HIV viral load by adhering to highly active anti-retroviral treatments.11,12

Despite awareness for more than two decades of the very high notification rates of chlamydia, gonorrhoea, trichomoniasis and syphilis in many remote communities, there is a gap in our knowledge about the extent of HIV testing, including concurrent testing with other STI diagnostic testing. We report here on an analysis of clinical and laboratory records from 65 remote Aboriginal communities participating in the randomised, controlled community trial, STRIVE (STI in remote communities: improved and enhanced primary health care).13 The communities are located in four regions (two in the Northern Territory, one in northern Western Australia and one in Far North Queensland; anonymised in our reported results), and the approximate combined community population of people aged 16–34 years was 28 000 according to Australian Bureau of Statistics data. The trial examined whether a sexual health quality improvement program could increase STI testing to a level sufficient to reduce the community prevalence of STIs. Our aim was to determine the level of concurrent HIV testing of individuals who had received positive results for chlamydia, gonorrhoea or trichomoniasis, and of concurrent HIV testing of people tested for syphilis.

Methods

The STRIVE trial collected de-identified data from pathology laboratories for the period January 2010 – December 2014. The primary outcome of our study was the rate of concurrent HIV testing (same day testing, or within 30 or 90 days of the diagnostic test for chlamydia, gonorrhoea or trichomoniasis) of people who tested positive for any of these three STIs. HIV testing referred to any HIV screening test conducted by the laboratory; no HIV rapid tests were used by the participating health services.

The unit of analysis was the episode of STI testing (tests for any of the three STIs on the same date). All episodes of STI testing that resulted in a positive result for any of the three STIs were included in the denominator for calculating the HIV testing rate. We focused on chlamydia, gonorrhoea and trichomoniasis because urine or a swab is collected when testing for these STIs, but not blood (which is needed for HIV testing). We selected 30 days as the cut-off point because most people would return for STI treatment within this period, and there would be an opportunity to collect blood for HIV testing if it had not been collected at the initial consultation. We separately analysed the period 1–30 days (ie, excluding same day HIV testing) and testing within 90 days.
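
As an illustration of how such an episode-level testing window can be computed, the Python/pandas sketch below flags whether a patient had any HIV test within 0–30, 1–30 or 0–90 days of a positive STI test. The column names and toy records are assumptions for illustration only; the STRIVE analyses themselves were run in Stata 14.

```python
import pandas as pd

# Toy records; column names are assumptions for illustration only.
sti_positives = pd.DataFrame({
    "patient_id": [1, 1, 2],
    "sti_test_date": pd.to_datetime(["2012-03-01", "2013-07-10", "2012-05-20"]),
})
hiv_tests = pd.DataFrame({
    "patient_id": [1, 2],
    "hiv_test_date": pd.to_datetime(["2012-03-01", "2012-06-05"]),
})

def tested_within(row, min_days, max_days):
    """True if the patient had any HIV test between min_days and max_days
    (inclusive) after the positive STI test episode."""
    tests = hiv_tests.loc[hiv_tests["patient_id"] == row["patient_id"],
                          "hiv_test_date"]
    delta = (tests - row["sti_test_date"]).dt.days
    return bool(((delta >= min_days) & (delta <= max_days)).any())

sti_positives["hiv_within_30d"] = sti_positives.apply(tested_within, axis=1, args=(0, 30))
sti_positives["hiv_1_to_30d"] = sti_positives.apply(tested_within, axis=1, args=(1, 30))
sti_positives["hiv_within_90d"] = sti_positives.apply(tested_within, axis=1, args=(0, 90))

print(sti_positives)
# Patient 1, first episode: same-day HIV test -> counted within 30 days but not 1-30 days.
# Patient 2: HIV test 16 days later -> counted in both windows.
```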

Secondary outcomes were the rate of concurrent syphilis testing (within 30 days of a positive STI test), and the rate of HIV testing (on the same day and within 30 days of the syphilis test) among people tested for syphilis (regardless of the test result), as syphilis testing requires collecting blood, thereby making HIV testing more convenient. We analysed syphilis separately from the other three STIs because it was difficult to distinguish between latent and infectious syphilis cases on the basis of the datasets to which we had access; for latent cases, the decision as to whether an HIV test should be ordered would be determined by the clinician.

We used multivariate logistic regression to determine factors independently associated with HIV testing and syphilis testing within 30 days of an STI test with a positive result. Age group, sex, geographic region, and year of the positive test were included in the model. We examined models adjusted for clustering by patient, clinic, and region, and also a model with a patient random effect, as it accounted for most variation; this final version is reported in this article. All analyses were conducted in Stata 14 (StataCorp).
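
The final model used a patient-level random effect fitted in Stata 14. As a rough, non-equivalent approximation for readers working in Python, the sketch below fits an ordinary logistic regression with patient-clustered standard errors on synthetic stand-in data; all variable names and values are assumptions, not the STRIVE dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in data: variable names mirror the covariates described in
# the text, but the values are random and purely illustrative.
rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "patient_id": rng.integers(0, 800, n),
    "sex": rng.choice(["men", "women"], n),
    "age_group": rng.choice(["16-19", "20-24", "25-29", "30-34", "35+"], n),
    "region": rng.choice([1, 2, 3, 4], n),
    "year": rng.choice([2010, 2011, 2012, 2013, 2014], n),
})
df["hiv_within_30d"] = rng.binomial(1, 0.32, n)  # ~32% tested, as in the study

model = smf.logit(
    "hiv_within_30d ~ C(sex) + C(age_group) + C(region) + C(year)",
    data=df,
)
# Patient-clustered standard errors approximate the within-patient correlation
# that the published model handled with a patient random effect.
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["patient_id"]})
print(result.summary())
```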

Ethics approval

The STRIVE trial was approved by the Central Australian Human Research Ethics Committee (HREC) (reference, 2009.11.03), the HREC of the NT Department of Health and Families and the Menzies School of Health Research (reference, 09/98), the University of New South Wales HREC (B) (reference, HREC 10112), the WA Aboriginal Health Information and Ethics Committee (reference, 267-11/09), the WA Country Health Service Board Research Ethics Committee (reference, 2010: 04), and the Cairns and Hinterland, Cape York, Torres Strait and Northern Peninsula HREC (reference, HREC/09/QCH/122). Participating health services signed a site participation agreement before commencing involvement in STRIVE.

Results

During the 2010–2014 study period, there were 15 260 positive test results for STIs (chlamydia, gonorrhoea or trichomoniasis), including 4190 in men and 11 055 in women; there were 5015 positive chlamydia, 4546 positive gonorrhoea and 8954 positive trichomoniasis test results. Of the 15 260 positive test results, 31.8% were associated with an HIV test within 30 days, including 5.6% between 1 and 30 days (ie, excluding same day testing) (Box 1); 34.8% were associated with an HIV test within 90 days (data not shown). When analysed by geographical region, the proportion of people with a positive STI test who had an HIV test within 30 days ranged between 29.6% and 40.2%. Of all people tested for syphilis (regardless of the test result), 53.4% were also tested for HIV within 30 days (Box 2). Further, 44.1% of those who received a positive STI test result were tested for syphilis within 30 days of the STI test (Box 1).

Multivariate analysis found that HIV testing within 30 days of a positive STI test was more likely for men, in geographical regions 3 and 4 (v region 1; and less likely in region 2), and in association with positive STI test results during 2012, 2013 or 2014 (v 2010) or with positive STI tests for gonorrhoea or chlamydia (v other two STIs combined). Similar associations pertained to syphilis testing within 30 days of an STI test with a positive result (Box 3).

Discussion

We found a low rate of HIV testing within 30 days of an STI diagnostic test with a positive result in remote communities with persistently high rates of curable STIs. About one-third of all people with positive STI test results were tested for HIV within 30 days, irrespective of age group. The rate was significantly higher in men, which may reflect more full STI screens being undertaken in men presenting with symptoms or risk behaviour. There was a slightly higher rate of HIV testing when blood for syphilis testing was collected, but it was still less than optimal according to current clinical recommendations. Most HIV testing occurred on the same day as other STI testing (94%), suggesting that full STI screens were being undertaken.

The rate of HIV testing within 30 days of the STI test varied somewhat between health services in different geographical regions, but did not exceed 40.2% in any region. The low rate of HIV testing we observed is not confined to communities in northern Australia. Preliminary data from a study of four Aboriginal primary health care services in urban and regional areas of New South Wales also indicate that the rate of HIV testing associated with a positive STI diagnosis was low (42%), and, similar to what we found, most tests (82%) were conducted on the same day as the other STI test.14

We found that the rate of HIV testing within 30 days of an STI test with a positive result increased across the STRIVE study period; a formal analysis is evaluating whether this difference can be attributed to the intervention.

The strengths of our study include the large dataset, comprising data for patients with an STI diagnosis from 65 remote communities across the NT, WA and Queensland. In addition, capture of records of STI testing was complete, as each community participating in STRIVE has only one clinical service provider and used one of three pathology laboratories that provided data on all testing undertaken during the study period. A limitation is that we only had access to information for those who consented to HIV testing; it is therefore possible that some individuals were offered testing but declined. We did not collate the results of HIV testing in the study, and it is possible that some patients known to be HIV-positive were included in the study, who would therefore not have required HIV testing. However, this would only account for a very small number of people, given the low rate of HIV-positivity in remote health services.

HIV testing is important at the patient level and from a public health perspective, ensuring that people with HIV are identified quickly in order to reduce transmission by individuals who may not know their status, to enable rapid contact tracing, an efficient strategy for identifying undiagnosed infections,15 and so that patients with HIV can start treatment as early as possible.

Our study identified adherence to HIV screening recommendations as an area in the delivery of sexual health services to remote communities that clearly needs to be improved. Barriers to offering HIV testing in these settings should be investigated. Urgently needed are training and systems that increase awareness of clinical guidelines among clinical staff and support their implementing these guidelines when testing for STIs in remote communities. As nearly 40% of young people in the services we investigated had been diagnosed with at least one of the three STIs,4 and in view of the current syphilis outbreak, offering a full STI screen is likely to be an efficient way to improve HIV testing rates. Failure to increase HIV testing risks an outbreak that may be difficult to control in remote settings. HIV testing should be a priority, and its uptake should be routinely audited by those managing sexual health programs or remote area clinics.

Box 1 –
HIV testing of people aged 16–34 years attending 65 remote primary health care services within 30 days of a sexually transmissible infection (STI)* diagnostic test for which the result was positive, 2010–2014

Category | Any positive STI test | HIV test within 30 days (including same day) | Syphilis test within 30 days (including same day) | HIV test within 30 days (excluding same day) | Syphilis test within 30 days (excluding same day)
Total | 15 260 | 4858 (31.8%) | 6727 (44.1%) | 854 (5.6%) | 1099 (7.2%)
Sex: men | 4190 | 2035 (48.6%) | 2355 (56.2%) | 208 (5.0%) | 209 (5.0%)
Sex: women | 11 055 | 2815 (25.5%) | 4361 (39.4%) | 646 (5.8%) | 889 (8.0%)
Age 16–19 years | 3924 | 1305 (33.3%) | 1761 (44.9%) | 259 (6.6%) | 302 (7.7%)
Age 20–24 years | 3827 | 1282 (33.5%) | 1777 (46.4%) | 233 (6.1%) | 300 (7.8%)
Age 25–29 years | 2486 | 819 (33.0%) | 1106 (44.5%) | 119 (4.8%) | 171 (6.9%)
Age 30–34 years | 1597 | 498 (31.2%) | 686 (42.9%) | 83 (5.2%) | 112 (7.0%)
Age ≥ 35 years | 3416 | 954 (27.9%) | 1397 (40.9%) | 163 (4.8%) | 214 (6.3%)
Region 1 | 4320 | 1314 (30.4%) | 1528 (35.4%) | 121 (2.8%) | 161 (3.7%)
Region 2 | 7670 | 2269 (29.6%) | 3639 (47.4%) | 435 (5.7%) | 633 (8.3%)
Region 3 | 1087 | 437 (40.2%) | 620 (57.0%) | 131 (12.1%) | 105 (9.7%)
Region 4 | 2183 | 838 (38.4%) | 940 (43.1%) | 170 (7.8%) | 200 (9.2%)
Year 2010 | 2658 | 765 (28.8%) | 1095 (41.2%) | 167 (6.3%) | 193 (7.3%)
Year 2011 | 2994 | 907 (30.3%) | 1389 (46.4%) | 196 (6.5%) | 253 (8.5%)
Year 2012 | 3044 | 935 (30.7%) | 1372 (45.1%) | 175 (5.7%) | 220 (7.2%)
Year 2013 | 3133 | 990 (31.6%) | 1347 (43.0%) | 138 (4.4%) | 205 (6.5%)
Year 2014 | 3425 | 1261 (36.8%) | 1524 (44.5%) | 181 (5.3%) | 228 (6.7%)
Chlamydia | 5015 | 1883 (37.5%) | 2439 (48.6%) | 361 (7.2%) | 396 (7.9%)
Gonorrhoea | 4546 | 1787 (39.3%) | 2101 (46.2%) | 279 (6.1%) | 342 (7.5%)
Trichomoniasis | 8954 | 2360 (26.4%) | 3703 (41.4%) | 437 (4.9%) | 638 (7.1%)

* Chlamydia, gonorrhoea or trichomoniasis. † Missing values not included. ‡ Data missing for 15 people.

Box 2 –
HIV testing of people aged 16–34 years attending 65 remote primary health care services within 30 days of a test for syphilis, 2010–2014

Category | Syphilis test* | HIV testing within 30 days (including same day)
Total | 46 744 | 24 961 (53.4%)
Sex: men | 19 718 | 11 743 (59.6%)
Sex: women | 26 961 | 13 192 (48.9%)
Age 16–19 years | 6481 | 3640 (56.2%)
Age 20–24 years | 9306 | 5114 (55.0%)
Age 25–29 years | 8095 | 4512 (55.7%)
Age 30–34 years | 6295 | 3474 (55.2%)
Age ≥ 35 years | 16 553 | 8220 (49.7%)
Region 1 | 8221 | 4765 (58.0%)
Region 2 | 27 533 | 13 369 (48.6%)
Region 3 | 4978 | 2124 (42.7%)
Region 4 | 6012 | 4703 (78.2%)
Year 2010 | 7576 | 2765 (36.5%)
Year 2011 | 9546 | 3802 (39.8%)
Year 2012 | 9812 | 5228 (53.3%)
Year 2013 | 9559 | 6043 (63.2%)
Year 2014 | 10 241 | 7123 (69.6%)

* Missing values not included.

Box 3 –
Multivariate model of HIV and syphilis testing within 30 days of a sexually transmissible infection (STI)* diagnostic test for which the result was positive (including same day as initial test)

Variable | HIV test within 30 days: odds ratio (95% CI), P | Syphilis test within 30 days: odds ratio (95% CI), P
Women | 1.00 (reference) | 1.00 (reference)
Men | 2.67 (2.43–2.92), P < 0.01 | 2.12 (1.95–2.31), P < 0.01
Age 16–19 years | 1.00 (reference) | 1.00 (reference)
Age 20–24 years | 1.07 (0.96–1.19), P = 0.23 | 1.08 (0.98–1.20), P = 0.11
Age 25–29 years | 1.08 (0.96–1.22), P = 0.20 | 1.03 (0.92–1.15), P = 0.61
Age 30–34 years | 1.06 (0.92–1.23), P = 0.42 | 0.99 (0.87–1.13), P = 0.94
Age ≥ 35 years | 0.93 (0.82–1.05), P = 0.22 | 0.87 (0.78–0.97), P = 0.01
Region 1 | 1.00 (reference) | 1.00 (reference)
Region 2 | 0.87 (0.79–0.95), P < 0.01 | 0.52 (0.50–0.57), P < 0.01
Region 3 | 1.52 (1.30–1.77), P < 0.01 | 1.43 (1.24–1.66), P < 0.01
Region 4 | 1.28 (1.14–1.44), P < 0.01 | 0.74 (0.67–0.83), P < 0.01
Year 2010 | 1.00 (reference) | 1.00 (reference)
Year 2011 | 1.10 (0.97–1.25), P = 0.13 | 1.30 (1.16–1.45), P < 0.01
Year 2012 | 1.14 (1.01–1.30), P = 0.04 | 1.25 (1.12–1.40), P < 0.01
Year 2013 | 1.24 (1.09–1.40), P < 0.01 | 1.17 (1.04–1.31), P = 0.01
Year 2014 | 1.61 (1.42–1.82), P < 0.01 | 1.24 (1.11–1.39), P < 0.01
Chlamydia‡ | 1.28 (1.10–1.34), P < 0.01 | 1.22 (1.11–1.34), P < 0.01
Gonorrhoea‡ | 1.52 (1.16–1.42), P < 0.01 | 1.11 (1.01–1.22), P = 0.03
Trichomoniasis‡ | 0.87 (0.84–1.05), P = 0.24 | 1.10 (0.99–1.21), P = 0.08

* Chlamydia, gonorrhoea or trichomoniasis. † The model was adjusted for individual clustering by including an individual patient random effect. ‡ Reference group for each comparison consists of patients with the other two sexually transmissible infections.

The Paleo diet and diabetes

Studies are inconclusive about the benefits of the Paleo diet in patients with type 2 diabetes

Type 2 diabetes is characterised by fasting hyperglycaemia as a result of insulin resistance and defects in insulin secretion. Obesity is the major risk factor for the development of the condition and a number of studies — including the Diabetes Prevention Program, the Da Qing IGT and Diabetes Study, and the Finnish Diabetes Prevention Study — have shown that lifestyle modification (diet and exercise) can reduce the progression of glucose intolerance (prediabetes) to diabetes by up to 58%.1–3 In addition, a recent study showed that a very-low-calorie diet for 8 weeks resulted in remission of type 2 diabetes for at least 6 months in 40% of the participants.4 As such, clinical guidelines prescribe lifestyle modification as first-line treatment for type 2 diabetes and indeed throughout the management of the disease process.5 Therefore, it is clear that dietary intervention is a critical component of the glucose-lowering strategy in diabetes.

The Paleolithic or hunter–gatherer diet is currently popular for weight loss, diabetes management and general wellbeing. It recommends avoidance of processed food, refined sugars, legumes, dairy, grains and cereals, and instead it advocates for grass-fed meat, wild fish, fruit, vegetables, nuts and “healthy” saturated fat. In the early 1980s, O’Dea showed that 7 weeks of living as hunter–gatherers and consuming a high-protein, low-fat diet with an energy intake of 5020 kJ per person per day significantly improved or normalised the metabolic abnormalities of Indigenous Australians with type 2 diabetes.6 Thus, in its purest sense, the focus on fresh foods and avoidance of processed foods seems reasonable and consistent with dietary guidelines worldwide. However, what constitutes a Paleolithic diet is often skewed by individual interpretation or bias. This lack of a standard definition further complicates research evidence for or against this dietary approach and is often supported by individual self-reported benefits on health and wellbeing in popular social media channels. Is there scientific evidence that the Paleolithic diet is better for diabetes management than any other diet that advocates reducing energy intake?

Given its popularity, it was somewhat surprising that a PubMed search using the terms “Paleolithic diet and diabetes” resulted in only 23 articles, with many being reviews or commentaries. This is a similar outcome to a recently published systematic review of Paleolithic nutrition and metabolic syndrome.7 Clinical studies in patients with type 2 diabetes have only been performed by two research groups. Lindeberg and colleagues, from Sweden, published a randomised crossover study of the effects of a 3-month Paleolithic diet compared with a diabetes diet (according to current guidelines) in 13 obese (body mass index [BMI] of 30 ± 7 kg/m2) well controlled (glycated haemoglobin [HbA1c], 48.6 ± 1.5 mmol/mol) patients with type 2 diabetes.8 The data showed that while both diets resulted in a reduction in BMI and HbA1c, the Paleolithic diet achieved a significantly lower absolute value for these parameters. However, it is important to note that the patients on the Paleolithic diet had a lower BMI and HbA1c at baseline and at the 3-month crossover, so it is not clear whether the relative reductions were similar with these diets. In addition, although there was no significant difference in oral glucose tolerance, the high-density lipoprotein levels were higher and triglyceride levels and diastolic pressure were lower with the Paleolithic diet. It is interesting that, based on a 4-day diet diary halfway through the intervention, the patients on the Paleolithic diet consumed less total energy. A follow-up study suggested that the Paleolithic diet may well be more satiating in patients with type 2 diabetes.9 In support of these results, Frassetto and colleagues showed, in a 14-day study of patients with type 2 diabetes, that both the Paleolithic diet (including canola oil and honey; n = 14) and standard diet (according to the American Diabetes Association recommendations; n = 10)10 resulted in a small reduction in HbA1c levels, with no differences in insulin resistance (as assessed with a euglycaemic–hyperinsulinaemic clamp), blood pressure or blood lipids between the diets.11 There was, however, a beneficial effect of the Paleolithic diet only when compared with baseline for fasting plasma glucose, fructosamine, lipid levels and insulin sensitivity. It is important to note that canola oil is generally not considered a component of a Paleolithic diet. Moreover, this study was designed to maintain body weight at the baseline level in both groups of patients, with the result being a small but significant weight loss of 2.1 ± 1.9 kg and 2.4 ± 0.7 kg in the standard and Paleolithic diets respectively. In summary, these small and short-term studies tend to indicate some benefit but do not convincingly show that a Paleolithic diet is effective for weight loss and glycaemic control in type 2 diabetes.

In addition to the above studies of patients with type 2 diabetes, the Paleolithic diet has also been studied in healthy normal-weight individuals.12 Compared with a reference meal (based on the World Health Organization guidelines),13 there was very little effect on plasma glucose and insulin levels during an oral glucose tolerance test, but statistically significant increases were found in plasma glucagon-like peptide-1, glucose-dependent insulinotropic peptide and peptide YY. These hormone changes were associated with a higher satiety score. One of the Paleolithic meals used in this study caused an increase in the glucose excursion associated with a reduction in the insulin excursion during the glucose tolerance test.12 Similarly, in nine overweight healthy individuals, a Paleolithic diet for 10 days resulted in no change in fasting plasma glucose or insulin levels, but it showed reduced plasma lipid levels and blood pressure compared with the baseline usual diet.14 It is interesting that, while insulin levels during an oral glucose tolerance test were lower with the Paleolithic diet compared with baseline, the authors did not report the glycaemic excursions during this test. Moreover, a 2-week study in obese patients (n = 18) with the metabolic syndrome did not show an effect on glucose tolerance, but it resulted in reduced blood pressure and plasma lipid levels associated with a small but significant decrease in weight.15 In patients with ischaemic heart disease plus either glucose intolerance or type 2 diabetes (n = 14), a Paleolithic diet for 12 weeks resulted in reduced glucose and insulin excursions during the glucose tolerance test and was associated with a 26% reduction in energy intake, compared with a Mediterranean-style diet (n = 15).16 Again, in the absence of changes in weight or energy intake, the Paleolithic diet is as effective in improving the above metabolic parameters as a standard diet.

Thus, given that even very short deficits in energy balance can improve metabolic parameters,17 it is difficult to make strong conclusions about the long term benefits of the Paleolithic diet in type 2 diabetes (or any other condition), because of the short duration of the interventions (less than 12 weeks), the lack of a proper control group in some instances, and the small sample size (less than 20 individuals) of the above studies. While it makes sense that the Paleolithic diet promotes avoidance of refined and extra sugars and processed energy dense food, clearly more randomised controlled studies with more patients and for a longer period of time are required to determine whether it has any beneficial effect over other dietary advice.

What are the top 10 drugs used in Australia?

For the first time in 20 years, statins have not topped the list of the most costly drugs for the Australian government to fund.

Australian Prescriber has released the top 10 drugs used in Australia as well as the top 10 by cost to government. The figures are based on PBS and RPBS prescriptions.

Atorvastatin dropped out of the top 10 by cost to government; however, it still topped the lists for daily dose and prescription counts.

The most expensive drug for the government is adalimumab, a monoclonal antibody indicated for the treatment of rheumatoid arthritis, juvenile idiopathic arthritis, psoriatic arthritis, ankylosing spondylitis, Crohn’s disease, ulcerative colitis, psoriasis and hidradenitis suppurativa. It cost the government $311 616 305 for 176 062 prescriptions from July 2014 to June 2015.


Two injectable drugs to treat age-related macular degeneration appeared for the first time. Aflibercept came in as the third most expensive for the government, costing nearly $193 million for 123 123 prescriptions, and ranibizumab was the fourth most expensive, costing nearly $180 million for 116 311 prescriptions.

On the most prescribed list were two statins (atorvastatin at number 1 and rosuvastatin at number 3), as well as proton-pump inhibitors (esomeprazole at number 2 and pantoprazole at number 5), an analgesic (paracetamol at number 4) and a type 2 diabetes drug (metformin at number 6).

See the full list at Australian Prescriber.


Medicinal cannabis can now be prescribed by NSW GPs

New regulation means that from 1 August 2016, NSW doctors can seek approval to prescribe medicinal cannabis for patients who need it.

Previously, patients could only legally access cannabis-based medicines through clinical trials. However, thanks to changes under the Poisons and Therapeutic Goods Amendment (Designated Non-ARTG Products) Regulation 2016 (under the Poisons and Therapeutic Goods Act 1966), the drugs can now be prescribed for patients who have exhausted their standard treatment options.

“People who are seriously ill should be able to access these medicines if they are the most appropriate next step in their treatment,” NSW Premier Mike Baird said on Sunday.


How do doctors get approval to prescribe?

In order to prescribe the drugs, doctors will need to get approval from both the Commonwealth Therapeutic Goods Administration and NSW Health.

According to NSW Health, in making their decision, the Commonwealth “will consider the prescriber’s expertise, the suitability of the product to treat the patient’s condition, and the quality of the product.”

A committee of medical experts from NSW Health will review the prescriber’s application, and will consider “whether the unregistered cannabis-based product is being appropriately prescribed for the patient’s condition.”

Related: MJA – Medicinal cannabis in Australia: the missing links

What can be prescribed?

Some cannabis-based products have already been assessed for quality, safety and efficacy by the medicines regulator. These include:

  • Nabiximols (Sativex®) – registered in Australia with the Therapeutic Goods Administration for managing spasticity associated with multiple sclerosis.
  • Dronabinol – registered by the US Food and Drug Administration for anorexia in patients with AIDS and chemotherapy-induced nausea and vomiting, where standard treatment has failed.
  • Nabilone – registered by the US Food and Drug Administration for chemotherapy-induced nausea and vomiting.

Although applications aren’t limited to the above products, the products applied for must be legally produced and manufactured to appropriate quality standards. There must also be evidence that supports the use of that product for the patient’s condition.

How do doctors apply?

For more information and to apply for authority to prescribe and supply cannabis products, visit NSW Health’s Pharmaceutical page. More information can also be found at their Cannabis and cannabis products information site.


Your postcode shouldn’t determine your health – or whether you’re admitted to hospital

When people end up in hospital for diabetes, tooth decay or other conditions that should be treatable or manageable out of hospital, it is a warning sign of system failure. And Australia’s health system is consistently failing some communities.

A Grattan Institute report, Perils of place: identifying hotspots of health inequalities, released today, identifies a number of geographical areas where high rates of potentially preventable hospital admissions have persisted for a decade. This is unacceptable place‑based inequality.

Using data from Queensland and Victoria, the report identifies 38 places in Queensland and 25 in Victoria that have had potentially preventable hospitalisation rates at least 50% higher than the state average in every year for a decade. There is no evidence to suggest the pattern is any different in other states and territories.
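
As a rough illustration of the selection rule used in the report (a sketch with made-up figures, not the report's data or code), an area counts as a persistent hotspot only if its rate is at least 50% above the state average in every one of the ten years:

    # Sketch only: hypothetical rates, not the Grattan Institute's data.
    state_average = {year: 1000.0 for year in range(2005, 2015)}  # admissions per 100,000

    area_rates = {
        "Area A": {year: 1600.0 for year in range(2005, 2015)},  # high in every year
        "Area B": {year: 1600.0 if year == 2010 else 950.0
                   for year in range(2005, 2015)},               # high in one year only
    }

    def is_persistent_hotspot(rates, average, factor=1.5):
        """True only if the area's rate is >= factor x the state average in every year."""
        return all(rates[year] >= factor * average[year] for year in average)

    for name, rates in area_rates.items():
        print(name, is_persistent_hotspot(rates, state_average))
    # Area A True, Area B False: a single bad year is not enough to qualify.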

Reducing potentially preventable hospitalisations in these places to average levels would save at least A$10 million a year for the Queensland and Victorian health systems. Indirect savings, such as improving the productivity of the people affected, should be significantly larger.

Different places, different problems

Some of the areas identified as having high rates of potentially preventable admissions were in remote areas such as Mt Isa in Queensland. Others were in suburban centres such as Broadmeadows in Melbourne.

In some places, the high rates of admissions were driven by high rates of re-admissions – a small number of people each having a large number of admissions each year. In these places, better targeting care to high-risk individuals may help to reduce rates.

Yet in other places, re-admissions did not contribute to the problem at all.

Areas that have a low socioeconomic status, are regional, and/or have a high proportion of Indigenous people are more likely to experience health inequalities.

But even in Australia’s most disadvantaged areas, persistently high rates of potentially preventable hospitalisations are rare. Because many such areas have low rates of potentially preventable hospitalisations, examining why some have a problem while others do not may help to understand what needs to improve.

What can governments do about it?

The Grattan Institute’s report has three clear messages for governments and local health agencies such as Primary Health Networks.

First, make sure prevention efforts are focused in places where high rates of potentially preventable hospitalisations have existed for a while. These are the places where health inequalities are already entrenched and, without intervention, are most likely to endure.

On average, about half of the areas that had a high rate of potentially preventable hospitalisations in one year dropped back closer to the state average the next year (55% in Victoria, 45% in Queensland). This means that if governments or Primary Health Networks make their intervention decisions based on just one year of data, they may gain a false sense of reassurance that their interventions are working when in fact the apparent success might just be the result of random chance.
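
A toy simulation (hypothetical numbers, not the report's data) shows why a single year of data is unreliable: with ordinary year-to-year variation, many areas flagged as hotspots in one year fall back towards the average the next year without any intervention at all.

    import random

    # Toy simulation only: each area has a stable underlying admission rate,
    # and the observed rate in each year varies randomly around it.
    random.seed(1)
    n_areas = 500
    true_rates = [random.uniform(800, 1600) for _ in range(n_areas)]

    def observe(rate):
        """One year's observed rate: the true rate plus random year-to-year noise."""
        return rate * random.gauss(1.0, 0.25)

    year1 = [observe(r) for r in true_rates]
    year2 = [observe(r) for r in true_rates]

    threshold1 = 1.5 * (sum(year1) / n_areas)   # "50% above average" in year 1
    threshold2 = 1.5 * (sum(year2) / n_areas)

    hotspots = [i for i, r in enumerate(year1) if r > threshold1]
    dropped_back = [i for i in hotspots if year2[i] <= threshold2]

    print(len(hotspots), "year-1 hotspots;",
          len(dropped_back), "no longer above the threshold in year 2")

Even though nothing changes between the two simulated years, a large share of year-1 hotspots fall back below the threshold in year 2, which is exactly the false reassurance described above.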

Second, think local. Australia is not a uniform country and a one-size-fits-all approach will not work. Some areas may have excellent local primary health care services but, in the face of very severe disease burdens, still end up with high rates of potentially preventable hospitalisations. Other areas might have poor access to primary care services.

There is no uniform pattern for the causes of high rates of potentially preventable hospitalisations. Tailored policy responses are required.

Primary Health Networks have been given responsibility to identify and address health needs in their regions. They must identify the areas with high rates of potentially preventable hospitalisations and distil why these rates are occurring. They then need to design locally tailored responses, in partnership with local health authorities and communities.

Unfortunately, there is as yet only limited evidence of what works in reducing potentially preventable hospitalisations. Governments should therefore invest in trials to reduce potentially preventable hospitalisations in places identified as having high rates.

The cost-effectiveness of interventions must be established on a small scale before they are rolled out to further areas.

This leads to the third message: interventions must be rigorously evaluated so they expand the evidence about what works. As Primary Health Networks become more sophisticated at identifying the people most in need and as the evidence from trials builds, efforts to reduce health inequalities should be strengthened and expanded beyond the priority places identified here.

The role of place in shaping people’s health and opportunity is well-established. Governments and Primary Health Networks must ensure all communities get a fair go.

Improving the health of people in these places with high rates of potentially preventable hospitalisations will, in the long run, reduce health costs. Even more importantly, it will increase social cohesion and inclusion, workforce participation and productivity, by making many more people healthy and able to make the most of their lives.

Stephen Duckett, Director, Health Program, Grattan Institute. This article was originally published on The Conversation.


Appropriate use of serum troponin testing in general practice: a narrative review

In this article, we review the evidence regarding troponin testing in a community setting, particularly relating to new information on the utility of high sensitivity assays and within the context of contemporary guidelines for the management of chest pain and the acute coronary syndrome. For this review, we synthesised relevant evidence from PubMed-listed articles published between 1996 and 2016 and our own experience to formulate an evidence-based overview of the appropriate use of cardiac troponin assays in clinical practice. We included original research studies, focusing on high quality randomised controlled trials and prospective studies where possible, systematic and other review articles, meta-analyses, expert consensus documents and specialist society guidelines, such as those from the National Heart Foundation of Australia and Cardiac Society of Australia and New Zealand. This article reflects our understanding of current state-of-the-art knowledge in this area.

What is the purpose of the serum troponin assay?

The troponin assay was designed to assist in diagnosis and improve risk stratification for people presenting in the emergency setting with symptoms suggestive of an acute coronary syndrome.1,2 These symptoms include:

  • chest, jaw, arm, upper back or epigastric pain or pressure

  • nausea

  • vomiting

  • dyspnoea

  • diaphoresis

  • sudden unexplained fatigue.

As the troponin assay was not designed for use in clinical contexts outside that of a possible acute coronary syndrome, an elevated troponin level in a patient without this history, although of prognostic value, is not likely to be due to myocardial infarction unless it was caused by a clinically silent event. The troponin test result should always be interpreted with reference to symptoms, comorbidities, physical examination findings and the electrocardiogram (ECG). The degree of troponin elevation is also used for quantifying the size of myocardial infarction, although it is not well validated for this purpose.3,4

What are the causes of serum troponin elevation?

Unlike creatine kinase, the analyte of the earlier assay, which is not specific to cardiac muscle, troponins are structural proteins unique to cardiac myocytes, and any elevation represents cardiac muscle injury or necrosis. Most cardiac troponin is attached to the myofilaments, but about 5% is free in the cytosol. In acute myocardial infarction or following cardiac trauma, there is disruption of the sarcolemmal membrane of the cardiomyocyte and release of troponin from the cytoplasmic pool. There is a delay in the appearance of troponin in serum of between 90 and 180 minutes,5–7 which means there is a requirement for serial testing of troponin levels in hospital emergency departments. Later, there is a prolonged release of troponin from the degradation of myofilaments over 10–14 days.

It is now clear that troponin may also be released under conditions of myocardial stress without cellular necrosis (including tachyarrhythmia, prolonged exercise, sepsis, hypotension or hypertensive crisis and pulmonary embolism)8,9 (Box 1), probably through the mechanism of stress-induced myocyte bleb formation10 and release of a small portion of the cytoplasmic troponin pool. Elevations of troponin seen in this context are sometimes erroneously referred to as “false positives”; this is incorrect because any troponin elevation is truly abnormal and is prognostic in many clinical states outside of the acute coronary syndrome.11

The serum troponin assay was designed to screen patients for spontaneous, usually atherothrombotic, myocardial infarction, but under the new classification of myocardial infarction (Box 2),12 troponin elevations associated with demand–supply imbalance have led to the new diagnostic category of type 2 myocardial infarction (which is more likely to be associated with reversible or minimal myocardial injury, rather than permanent myocardial necrosis). The prevalence of all types of myocardial infarction, particularly type 2, has been amplified by the new high sensitivity troponin assays. A rise and fall in serum troponin level is required to confirm an acute myocardial infarction, irrespective of the type of troponin assay used. Chronic stable elevations are seen in some conditions (eg, chronic heart failure) where the lack of change over time indicates that an acute process is not present. True instances of false-positive troponin elevation due to calibration errors, heterophile antibodies or interfering substances have been greatly reduced by improved analytical techniques, blocking reagents and the use of antibody fragments.

What is different about the new high sensitivity troponin assays?

The newly developed high sensitivity assays provide reliable detection of very low concentrations of troponin and therefore offer earlier risk stratification of patients with possible acute coronary syndrome (3 hours after an episode of chest pain).7 The high sensitivity assays are also presented in different units (ng/L, rather than the previous μg/L), enabling the reporting of whole numbers (eg, 40 ng/L is equivalent to the earlier assay report of 0.04 μg/L).

By expert consensus, the assay must have a coefficient of variance of < 10% at the 99th percentile value of a reference population,13 which is the cut-off used for elevation. The benefit of the improved precision of the new high sensitivity assays is that even small elevations above this cut-off can be considered a true elevation, rather than an artefact of the assay. Examples of cut-off for elevation (> 99th percentile of a reference population) include a high sensitivity troponin T (hsTnT; Roche Elecsys) level of 14 ng/L, and a high sensitivity troponin I (hsTnI; Abbott Architect) level of 26 ng/L (these values may differ between pathology laboratories). It has been suggested that sex-specific cut-off values should be provided,12 and, in Australia, laboratories reporting the hsTnI assay often use these differing cut-offs (female, 16 ng/L; male, 26 ng/L).
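
As a simple illustration of the unit change and the cut-offs quoted above (a sketch only; the cut-off values are the examples given in the text, and individual laboratories may use different values):

    def ug_per_l_to_ng_per_l(value):
        """Convert a legacy troponin result in ug/L to high sensitivity units (ng/L)."""
        return value * 1000.0  # e.g. 0.04 ug/L -> 40 ng/L

    # Example 99th-percentile cut-offs quoted in the text (ng/L); illustrative only.
    CUTOFFS_NG_PER_L = {
        ("hsTnT", None): 14.0,       # Roche Elecsys, single cut-off
        ("hsTnI", "female"): 16.0,   # Abbott Architect, sex-specific cut-offs
        ("hsTnI", "male"): 26.0,
    }

    def is_elevated(assay, result_ng_per_l, sex=None):
        """True if the result is above the quoted 99th-percentile cut-off for that assay."""
        key = (assay, sex if assay == "hsTnI" else None)
        return result_ng_per_l > CUTOFFS_NG_PER_L[key]

    print(ug_per_l_to_ng_per_l(0.04))            # 40.0 ng/L
    print(is_elevated("hsTnI", 43.0, "female"))  # True: above the 16 ng/L cut-off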

A study in an Australian hospital found that use of the high sensitivity assays was associated with significantly earlier diagnosis and less time spent in the emergency department, but did not change the revascularisation rate or reduce mortality.14 A recent meta-analysis demonstrated that about 5% of an asymptomatic community population had an elevated serum troponin level when tested using a high sensitivity assay,11 clearly different to the reference population (screened to exclude comorbidities) that was used to derive the assay cut-off. Even in this asymptomatic cohort, an elevated troponin level had prognostic significance and was associated with a threefold greater risk of adverse cardiac outcomes compared with people with normal troponin levels. This reflects a greater hazard than identified previously for those with elevated cholesterol (risk ratio [RR], 1.9) or diabetes (RR, 1.7), or even from smoking (RR, 1.68).15

As older patients (aged ≥ 65 years) have a high prevalence of elevated troponin levels, a higher troponin cut-off has been proposed for this group.16,17 More than 50% of patients with heart failure have elevated high sensitivity troponin levels, and the level is correlated with prognosis.18 It has also been shown in a large cohort of patients with chronic atrial fibrillation who were taking anticoagulant therapy19 that troponin elevation was independently related to the long term risk of cardiovascular events and cardiac death.

When should a general practitioner measure serum troponin and what should be done if a high serum troponin level is found?

Patients who present with a history of a possible acute coronary syndrome, whose symptoms occurred between 24 hours and 14 days previously and who have been symptom-free since, and who have no high risk features (ongoing or recurrent pain, syncope, heart failure, abnormal ECG), could be assessed with a single serum troponin test. If patients have had ongoing symptoms within the preceding 24 hours, they should be referred immediately to an emergency department for assessment.20 For patients in whom a single troponin test is appropriate, the test should be labelled as urgent and, as the result has prognostic implications and may require an urgent action plan, a system must be in place to ensure medical notification of the result at any hour of the day or night. In this clinical context, even a small elevation in serum troponin level may indicate an acute coronary syndrome during the preceding 2 weeks, warranting urgent cardiac assessment and hospital referral.20 However, a negative serum troponin result in the absence of high risk features does not exclude a diagnosis of unstable angina, and urgent cardiac assessment would still be appropriate if the presenting symptoms are severe or repetitive.

When should a general practitioner not measure serum troponin?

Patients presenting with a possible acute coronary syndrome with symptoms occurring within the preceding 24 hours, or with possible acute coronary syndrome more than 24 hours previously and with high risk features such as heart failure, syncope or an abnormal ECG, require further investigations.20 These may include urgent angiography, serial troponin testing and further ECGs in a monitored environment where emergency reperfusion treatments are available. These patients should be referred and transported to a hospital emergency department by ambulance, as it is not appropriate to perform serial troponin testing of high risk patients in a community setting.20 High risk ECG abnormalities include tachyarrhythmia or bradyarrhythmia, any ST deviation, deep T wave inversion or left bundle branch block. Serial troponin testing is required to confirm a diagnosis of myocardial infarction, and these patients may require fibrinolysis or urgent angiography and revascularisation.

Measurement of troponin in asymptomatic people is not currently recommended: an elevated result can be problematic, with multiple possible causes and no clearly effective investigative strategies or therapies, and it has to be interpreted with respect to the entire clinical context.
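
The referral logic described in the two sections above can be summarised in a short sketch (illustrative only, with hypothetical parameter names; it is not a clinical decision tool and does not replace the guidance in the text):

    def gp_troponin_action(days_since_last_symptom, high_risk_features):
        """Sketch of the community triage logic described above (illustrative only).

        high_risk_features: ongoing or recurrent pain, syncope, heart failure
        or an abnormal ECG.
        """
        if days_since_last_symptom < 1 or high_risk_features:
            # Symptoms within the preceding 24 hours, or any high risk feature:
            # refer to an emergency department; do not order a community troponin test.
            return "refer urgently to an emergency department"
        if days_since_last_symptom <= 14:
            # Symptom-free for 24 hours to 14 days with no high risk features:
            # a single urgent troponin test is reasonable, with a system in place
            # to act on the result at any hour; a negative result does not
            # exclude unstable angina.
            return "single urgent troponin test; urgent cardiac referral if elevated"
        # Outside the window in which a single test is informative.
        return "assess clinically; routine troponin testing not recommended"

    print(gp_troponin_action(4, False))    # cf. patient 1: single urgent troponin test
    print(gp_troponin_action(0.2, False))  # cf. patient 2: refer urgently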

Case reports of appropriate and inappropriate use of troponin testing

Patient 1

A 72-year-old woman with type 2 diabetes tells you that she had 2 hours of chest tightness 4 days ago, but has been feeling well since then. Her physical examination is unremarkable, and you think her ECG is normal. You arrange for her to have an urgent serum troponin test, and the result is significantly elevated (hsTnI, 460 ng/L; female reference interval [RI], < 16 ng/L). You call a cardiologist, who arranges her immediate admission to hospital. Echocardiography shows hypokinesis of the anterior wall and apex and a left ventricular ejection fraction of 48%. Angiography shows a severe proximal left anterior descending artery lesion, which is treated with coronary stenting, and minor disease of the other arteries. She is discharged and has a good outcome.

Comment

In this setting, measurement of troponin is reasonable, as her symptoms occurred 4 days previously and she has had no further symptoms and has no high risk features.

Patient 2

A 68-year-old man presents to your surgery with a history of severe chest tightness lasting for 2 hours that morning. It has now resolved and he is pain-free 5 hours later. He has no major cardiovascular risk factors and his physical examination and ECG are normal. You do not order any other tests and arrange ambulance transport to a hospital emergency department. Testing at the hospital shows that his hsTnI level is elevated (84 ng/L; male RI, < 26 ng/L), and angiography shows severe left main coronary artery disease. He undergoes coronary revascularisation and has a good outcome.

Comment

This patient has had possible acute ischaemic symptoms within the past 24 hours. Troponin testing in a general practice setting should therefore not be performed, and the actions taken in sending this patient for urgent assessment are appropriate.

Patient 3

A 62-year-old man with no relevant past medical history presents with a history of several episodes in the past week of dull central chest pain lasting 5–10 minutes; the latest episode was 3 days ago. His physical examination and ECG are considered normal. An urgent serum troponin assay is performed and the result is normal (hsTnI, 3 ng/L; male RI, < 26 ng/L). You are worried that his clinical presentation may still be consistent with unstable angina. You contact a cardiologist, who arranges a stress echocardiogram the following day, which is strongly positive. The patient is admitted and is found to have severe three-vessel coronary artery disease. He undergoes revascularisation, with a good outcome.

Comment

This patient presents with symptoms suggestive of unstable angina. In this setting, irrespective of any troponin values, further urgent assessment is required.

Patient 4

A 52-year-old obese man with controlled hypertension has had multiple episodes in the past 12 months of prolonged retrosternal burning pain. These have often lasted several hours and are typically worse after meals and when recumbent. He has had no symptoms for the past 4 days. His physical examination and ECG are normal. A serum troponin test result is normal. You arrange a stress echocardiogram, which is normal, and an upper gastrointestinal endoscopy, which shows severe reflux oesophagitis. He commences taking proton pump inhibitors and has good control of his symptoms.

Comment

The symptoms of cardiac ischaemia are often atypical. In the absence of recent symptoms, consideration of a cardiac cause of this patient’s presentation is essential and, in the context of this case, a single troponin test is appropriate.

Patient 5

A 58-year-old previously well woman presents to you immediately after a 1-hour episode of burning central chest discomfort, which resolved spontaneously. She has experienced minor chest pain episodically for the past 3 days. Her physical examination and ECG are normal. It is 7 pm; you order a serum troponin test and give her a referral for an upper gastrointestinal endoscopy. As you leave the surgery, you turn off your mobile phone so that you will not be interrupted, as you are going to the cinema. When you turn your phone on later that evening, you have two messages. The first message tells you that the troponin test result showed an elevated level (hsTnI, 43 ng/L; female RI, < 16 ng/L). The second message is from your patient’s husband, who says your patient developed severe chest pain at home and they were uncertain what to do. When you call him back, he tearfully tells you that she had a cardiac arrest at home and did not survive.

Comment

A number of concerns arise in this case. First, the troponin test should not have been ordered as there was a significant clinical suspicion of an acute coronary syndrome and, with symptoms within the past 24 hours, the patient is considered potentially at high risk and should have been urgently referred to hospital, where serial ECGs, troponin testing and risk stratification could be performed in the safety of a fully equipped emergency department. Second, whenever troponin testing is used, systems must be in place for the result to be conveyed urgently to the medical practitioner21 and appropriate action taken.

Conclusions

Acute coronary syndrome remains a major cause of death and long term morbidity. For patients presenting to a general practice with a possible acute coronary syndrome within the preceding 24 hours, including those with symptoms consistent with unstable angina or with high risk clinical features, a serum troponin test should not be ordered. Instead, these patients should be referred to an emergency department for evaluation in a monitored environment capable of offering defibrillation, urgent fibrinolysis or revascularisation. However, patients presenting with ischaemic symptoms that occurred more than 24 hours previously, who are now symptom-free and have no high risk features, may be assessed with a single troponin assay and referred urgently to hospital if the result is elevated. If the troponin result is negative, unstable angina is not excluded and urgent or semi-urgent cardiac referral may still be appropriate, depending on the timing and severity of symptoms. When troponin assays are used, systems must be in place for the result to be conveyed urgently to a medical practitioner so that appropriate action may be taken.

Future directions

Further refinement of strategies that use high sensitivity troponin assays may improve upon the current 3-hour rule-out time for acute myocardial infarction. Other methods of early risk stratification, including imaging techniques, are currently being evaluated. In the future, troponin levels may also prove to be useful in many clinical contexts, including gauging cardiotoxicity with chemotherapeutic agents, identifying cardiac allograft rejection or monitoring patients with heart failure. In addition, there is potential for troponin testing to be included in newer models of general cardiovascular risk stratification, but until further evaluation in prospective trials demonstrates a clinical benefit, troponin should not be measured in asymptomatic individuals.

Box 1 –
Causes of serum troponin level elevation

  • Acute myocardial infarction (see Box 2)
  • Coronary artery spasm (eg, due to cocaine or methamphetamine use)
  • Takotsubo cardiomyopathy
  • Coronary vasculitis (eg, systemic lupus erythematosus, Kawasaki disease)
  • Acute or chronic heart failure
  • Tachyarrhythmia or bradyarrhythmia
  • Frequent defibrillator shocks
  • Cardiac contusion or surgery
  • Rhabdomyolysis with cardiac involvement
  • Myocarditis or infiltrative diseases (eg, amyloidosis, sarcoidosis, haemochromatosis)
  • Cardiac allograft rejection
  • Hypertrophic cardiomyopathy
  • Cardiotoxic agents (eg, anthracyclines, trastuzumab, carbon monoxide poisoning)
  • Aortic dissection or severe aortic valve disease
  • Severe hypotension or hypertension (eg, haemorrhagic shock, hypertensive emergency)
  • Severe pulmonary embolism, pulmonary hypertension or respiratory failure
  • Dialysis-dependent renal failure
  • Severe burns affecting > 30% of the body surface
  • Severe acute neurological conditions (eg, stroke, cerebral bleeding or trauma)
  • Sepsis
  • Prolonged exercise or extreme exertion (eg, marathon running)

Box 2 –
The new classification of myocardial infarction (MI)12

  • Type 1 (spontaneous): MI related to ischaemia from a primary coronary event such as plaque rupture, erosion, fissuring or dissection.
  • Type 2 (demand–supply imbalance): MI related to secondary ischaemia due to myocardial oxygen supply–demand imbalance such as spasm, anaemia, hypotension or arrhythmia.
  • Type 3 (sudden death): unexpected cardiac death, perhaps suggestive of MI, but occurring before blood samples can be obtained.
  • Type 4a (PCI): MI associated with a PCI procedure.
  • Type 4b (stent thrombosis): MI associated with stent thrombosis, as seen on angiography or autopsy.
  • Type 5 (CABG): MI associated with CABG.

CABG = coronary artery bypass grafting. PCI = percutaneous coronary intervention.

Guideline for the diagnosis and management of hypertension in adults — 2016

Blood pressure (BP) is an important common modifiable risk factor for cardiovascular disease. In 2014–15, 6 million adult Australians were hypertensive (BP ≥ 140/90 mmHg) or were taking BP-lowering medication.1 Hypertension is more common in those with lower household incomes and in regional areas of Australia (http://heartfoundation.org.au/about-us/what-we-do/heart-disease-in-australia/high-blood-pressure-statistics). Many Australians have untreated hypertension, including a significant proportion of Aboriginal and Torres Strait Islander people.1

Cardiovascular diseases are associated with a high level of health care expenditure.2 Controlled BP is associated with lower risks of stroke, coronary heart disease, chronic kidney disease, heart failure and death. Small reductions in BP (1–2 mmHg) are known to markedly reduce population cardiovascular morbidity and mortality.3,4

Method

The National Blood Pressure and Vascular Disease Advisory Committee, an expert committee of the National Heart Foundation of Australia, has updated the Guide to management of hypertension 2008: assessing and managing raised blood pressure in adults (last updated in 2010)5 to equip health professionals across the Australian health care system, especially those within primary care and community services, with the latest evidence to prevent, detect and manage hypertension.

International hypertension guidelines6–8 were reviewed to identify key areas for review. Review questions were developed using the patient problem or population, intervention, comparison and outcome(s) (PICO) framework.9 Systematic literature searches (2010–2014) of MEDLINE, Embase, CINAHL and the Cochrane Library were conducted by an external organisation, and the resulting evidence summaries informed the updated clinical recommendations. The committee also reviewed additional key literature relevant to the PICO framework up to December 2015.

Recommendations were based on high quality studies, with priority given to large systematic reviews and randomised controlled trials, and consideration of other studies where appropriate. Public consultation occurred during the development of the updated guideline. The 2016 update includes the level of evidence and strength of recommendation in accordance with National Health and Medical Research Council standards10 and the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology.11 No level of evidence has been included where there was no direct evidence for a recommendation that the guideline developers agreed clearly outweighed any potential for harm.

Most of the major recommendations from the guideline are outlined below, together with background information and explanation, particularly in areas of change in practice. Key changes from the previous guideline are listed in Box 1. The full Heart Foundation Guideline for the diagnosis and management of hypertension in adults – 2016 is available at http://heartfoundation.org.au/for-professionals/clinical-information/hypertension. The full guideline contains additional recommendations in the areas of antiplatelet therapy, suspected BP variability, and initiating treatment using combination therapy compared with monotherapy.

Recommendations

Definition and classification of hypertension

Elevated BP is an established risk factor for cardiovascular disease. The relationship between BP level and cardiovascular risk is continuous, therefore the distinction between normotension and hypertension is arbitrary.12,13 Cut-off values are used for diagnosis and management decisions but vary between international guidelines. Current values for categorisation of clinic BP in Australian adults are outlined in Box 2.

Management of patients with hypertension should also consider absolute cardiovascular disease risk (where eligible for assessment) and/or evidence of end-organ damage. Several tools exist to estimate absolute cardiovascular disease risk. The National Vascular Disease Prevention Alliance developed a calculator for the Australian population, which can be found at http://www.cvdcheck.org.au.

Treatment strategies for individuals at high risk of a cardiovascular event may differ from those at low absolute cardiovascular disease risk despite similar BP readings. It is important to note that the absolute risk calculator has been developed using clinic BP, rather than ambulatory, automated office or home BP measures.

Some people are not suitable for an absolute risk assessment, including younger patients with uncomplicated hypertension and those with conditions that identify them as already at high risk.14

Blood pressure measurement

A comprehensive assessment of BP should be based on multiple measurements taken on several separate occasions. A variety of methods are available, each providing different but often complementary information. Methods include clinic BP, 24-hour ambulatory and home BP monitoring (Box 3).

Most clinical studies demonstrating effectiveness and benefits of treating hypertension have used clinic BP. Clinic, home and ambulatory BP all predict the risk of a cardiovascular event; however, home and ambulatory blood pressure measures are stronger predictors of adverse cardiovascular outcomes (Box 4).15,16

Automated office BP measurement involves taking repeated blood pressure measurements using an automated device with the clinician out of the room.17,18 This technique generally yields lower readings than conventional clinic BP and has been shown to have a good correlation with out-of-clinic measures.

The British Hypertension Society provides a list of validated BP monitoring devices.19 Use of validated and regularly maintained non-mercury devices is recommended as mercury sphygmomanometers are being phased out for occupational health and safety and environmental reasons.

Treatment thresholds

Although the benefits of lowering BP in patients with significantly elevated BP have been well established, the benefit of initiating drug therapy in patients with lower BP, with or without comorbidities, has been less certain. A meta-analysis of patients with uncomplicated mild hypertension (systolic BP range, 140–159 mmHg) indicated that BP-lowering therapy has beneficial cardiovascular effects, with reductions in stroke, cardiovascular death and all-cause mortality.20 Corresponding relative reductions in 5-year cardiovascular disease risk were similar for all levels of baseline BP.21

Decisions to initiate drug treatment at less severe levels of BP elevations should consider a patient’s absolute cardiovascular disease risk and/or evidence of end-organ damage together with accurate blood pressure readings.

Treatment targets

Optimal blood pressure treatment targets have been debated extensively. There is emerging evidence demonstrating the benefits of treating to optimal BP, particularly among patients at high cardiovascular risk.17,20

The recent Systolic Blood Pressure Intervention Trial investigated the effect of targeting a higher systolic BP level (< 140 mmHg) compared with a lower level (< 120 mmHg) in people over the age of 50 years who were identified as having a cardiovascular 10-year risk of at least 20%.17 Many had prior cardiovascular events or mild to moderate renal impairment and most were already on BP-lowering therapy at the commencement of the study. Patients with diabetes, cardiac failure, severe renal impairment or previous stroke were excluded. The method of measurement was automated office BP,18 a technique that generally yields lower readings than conventional clinic BP. Patients treated to the lower target achieved a mean systolic BP of 121.4 mmHg and had significantly fewer cardiovascular events and lower all-cause mortality compared with the other treatment group, which achieved a mean systolic level of 136.2 mmHg. Older patients (> 75 years) benefited equally from the lower target BP. However, treatment-related adverse events increased in the more intensively treated patients, with more frequent hypotension, syncopal episodes, acute kidney injury and electrolyte abnormalities.

The selection of a BP target should be based on an informed, shared decision-making process between patient and doctor (or health care provider), considering the benefits and harms and reviewed on an ongoing basis.

Recommendations for treatment strategies and treatment targets for patients with hypertension are set out in Box 5.

Box 1 –
Key changes from previous guideline

  • Use of validated non-mercury sphygmomanometers that are regularly maintained is recommended for blood pressure (BP) measurement.
  • Out-of-clinic BP using home or 24-hour ambulatory measurement is a stronger predictor of outcome than clinic BP measurement.
  • Automated office blood pressure (AOBP) provides similar measures to home and ambulatory BP, and results are generally lower than those from conventional clinic BP measurement.
  • BP-lowering therapy is beneficial (reduced stroke, cardiovascular death and all-cause mortality) for patients with uncomplicated mild hypertension (systolic BP, 140–159 mmHg).
  • For patients with at least moderate cardiovascular risk (10-year risk ≥ 20%), lower BP targets of < 120 mmHg systolic (using AOBP) provide benefit with some increase in treatment-related adverse effects.
  • Selection of a BP target should be based on informed, shared decision making between patients and health care providers considering the benefits and harms, and reviewed on an ongoing basis.

Box 2 –
Classification of clinic blood pressure in adults

Diagnostic category* (systolic/diastolic, mmHg):

  • Optimal: systolic < 120 and diastolic < 80
  • Normal: systolic 120–129 and/or diastolic 80–84
  • High-normal: systolic 130–139 and/or diastolic 85–89
  • Grade 1 (mild) hypertension: systolic 140–159 and/or diastolic 90–99
  • Grade 2 (moderate) hypertension: systolic 160–179 and/or diastolic 100–109
  • Grade 3 (severe) hypertension: systolic ≥ 180 and/or diastolic ≥ 110
  • Isolated systolic hypertension: systolic > 140 and diastolic < 90

Reproduced with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults — 2016. Melbourne: NHFA, 2016. * When a patient’s systolic and diastolic blood pressure levels fall into different categories, the higher diagnostic category and recommended actions apply.
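
A minimal sketch of how the Box 2 categories might be applied (illustrative only; isolated systolic hypertension is listed as a separate category and is not distinguished here):

    def classify_clinic_bp(systolic, diastolic):
        """Classify a clinic BP reading (mmHg) into the graded Box 2 categories (sketch only).

        Isolated systolic hypertension (systolic > 140 with diastolic < 90) is a
        separate category in Box 2 and is not distinguished in this sketch.
        """
        # Lower bounds for each category; when the two readings fall into different
        # categories, the higher category applies (Box 2 footnote).
        categories = [
            ("Optimal", 0, 0),
            ("Normal", 120, 80),
            ("High-normal", 130, 85),
            ("Grade 1 (mild) hypertension", 140, 90),
            ("Grade 2 (moderate) hypertension", 160, 100),
            ("Grade 3 (severe) hypertension", 180, 110),
        ]
        result = "Optimal"
        for name, sys_low, dia_low in categories:
            if systolic >= sys_low or diastolic >= dia_low:
                result = name
        return result

    print(classify_clinic_bp(118, 76))  # Optimal
    print(classify_clinic_bp(165, 92))  # Grade 2 (moderate) hypertension: higher category applies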

Box 3 –
Criteria for diagnosis of hypertension using different methods of measurement

Method of measurement (systolic/diastolic, mmHg):

  • Clinic: ≥ 140 and/or ≥ 90
  • ABPM daytime (awake): ≥ 135 and/or ≥ 85
  • ABPM night-time (asleep): ≥ 120 and/or ≥ 70
  • ABPM over 24 hours: ≥ 130 and/or ≥ 80
  • HBPM: ≥ 135 and/or ≥ 85

Reproduced with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults — 2016. Melbourne: NHFA, 2016. ABPM = ambulatory blood pressure monitoring. HBPM = home blood pressure monitoring.
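
The Box 3 thresholds lend themselves to a simple lookup (a sketch only, using the values listed above; the and/or rule means either reading alone is sufficient):

    # Diagnostic thresholds from Box 3 (systolic, diastolic in mmHg); sketch only.
    HYPERTENSION_THRESHOLDS = {
        "clinic": (140, 90),
        "ABPM daytime": (135, 85),
        "ABPM night-time": (120, 70),
        "ABPM 24-hour": (130, 80),
        "HBPM": (135, 85),
    }

    def meets_diagnostic_threshold(method, systolic, diastolic):
        """True if either reading reaches the Box 3 threshold for the given method."""
        sys_cut, dia_cut = HYPERTENSION_THRESHOLDS[method]
        return systolic >= sys_cut or diastolic >= dia_cut

    print(meets_diagnostic_threshold("HBPM", 138, 82))    # True: systolic >= 135
    print(meets_diagnostic_threshold("clinic", 138, 82))  # False: below 140/90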

Box 4 –
Recommendations for monitoring blood pressure (BP) in patients with hypertension or suspected hypertension

Grade of recommendation* and level of evidence† are shown in parentheses for each recommendation.

  • If clinic BP is ≥ 140/90 mmHg or hypertension is suspected, ambulatory and/or home monitoring should be offered to confirm the BP level (grade: strong; level: I).
  • Clinic BP measures are recommended for use in absolute cardiovascular risk calculators. If home or ambulatory BP measures are used in absolute cardiovascular disease risk calculators, risk may be inappropriately underestimated (grade: strong; level: –).
  • Procedures for ambulatory BP monitoring should be adequately explained to patients. Those undertaking home measurements require appropriate training under qualified supervision (grade: strong; level: I).
  • Finger and/or wrist BP measuring devices are not recommended (grade: strong; level: –).

Reproduced with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults — 2016. Melbourne: NHFA, 2016. * Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology.11 † National Health and Medical Research Council standards;10 no level of evidence included where there was no direct evidence for a recommendation that the guideline developers agreed clearly outweighed any potential for harm.

Box 5 –
Recommendations for treatment strategies and treatment targets for patients with hypertension, with grade of recommendation and level of evidence*

A healthy lifestyle, including not smoking, eating a nutritious diet and taking regular adequate exercise, is recommended for all Australians, with or without hypertension.

  • Lifestyle advice is recommended for all patients (grade: strong; level: –).
  • For patients at low absolute cardiovascular disease risk (5-year risk, < 10%) with persistent blood pressure (BP) ≥ 160/100 mmHg, antihypertensive therapy should be started (grade: strong; level: I).
  • For patients at moderate absolute cardiovascular disease risk (5-year risk, 10–15%) with persistent systolic BP ≥ 140 mmHg and/or diastolic ≥ 90 mmHg, antihypertensive therapy should be started (grade: strong; level: I).
  • Once the decision to treat has been made, patients with uncomplicated hypertension should be treated to a target of < 140/90 mmHg or lower if tolerated (grade: strong; level: I).
  • In selected high cardiovascular risk populations where a more intense treatment can be considered, aiming for a target of < 120 mmHg systolic BP can improve cardiovascular outcomes (grade: strong; level: II).
  • In selected high cardiovascular risk populations where a treatment is being targeted to < 120 mmHg systolic BP, close follow-up of patients is recommended to identify treatment-related adverse effects including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: II).
  • In patients with uncomplicated hypertension, angiotensin-converting enzyme (ACE) inhibitors or angiotensin-receptor blockers (ARBs), calcium channel blockers and thiazide diuretics are all suitable first-line antihypertensive drugs, either as monotherapy or in some combinations unless contraindicated (grade: strong; level: I).
  • The balance between efficacy and safety is less favourable for β-blockers than other first-line antihypertensive drugs. Thus β-blockers should not be offered as a first-line drug therapy for patients with hypertension that is not complicated by other conditions (grade: strong; level: I).
  • ACE inhibitors and ARBs are not recommended in combination due to an increased risk of adverse effects (grade: strong; level: I).

Treatment-resistant hypertension

Treatment-resistant hypertension is defined as a systolic BP ≥ 140 mmHg in a patient who is taking three or more antihypertensive medications, including a diuretic, at optimal tolerated doses. Contributing factors may include variable compliance, white coat hypertension or secondary causes of hypertension. Few drug therapies specifically target resistant hypertension. Renal denervation is currently being investigated as a treatment option in this condition; however, to date, it has not been found to be effective in the most rigorous study conducted.22

  • Optimal medical management (with a focus on treatment adherence and excluding secondary causes) is recommended (grade: strong; level: II).
  • Percutaneous transluminal radiofrequency sympathetic denervation of the renal artery is currently not recommended for the clinical management of resistant hypertension or lower grades of hypertension (grade: weak; level: II).

Patients with hypertension and selected comorbidities

Stroke and transient ischaemic attack:

  • For patients with a history of transient ischaemic attacks or stroke, antihypertensive therapy is recommended to reduce overall cardiovascular risk (grade: strong; level: I).
  • For patients with a history of transient ischaemic attacks or stroke, any of the first-line antihypertensive drugs that effectively reduce BP are recommended (grade: strong; level: I).
  • For patients with hypertension and a history of transient ischaemic attacks or stroke, a BP target of < 140/90 mmHg is recommended (grade: strong; level: I).

Chronic kidney disease:

Most classes of BP-lowering drugs have a similar effect in reducing cardiovascular events and all-cause mortality in patients with chronic kidney disease (CKD). When treating with diuretics, the choice should be dependent on both the stage of CKD and the extracellular fluid volume overload in the patient. Detailed recommendations on how to manage patients with CKD are available.23

  • In patients with hypertension and CKD, any of the first-line antihypertensive drugs that effectively reduce BP are recommended (grade: strong; level: I).
  • When treating hypertension in patients with CKD in the presence of microalbuminuria or macroalbuminuria, an ARB or ACE inhibitor should be considered as first-line therapy (grade: strong; level: I).
  • In patients with CKD, antihypertensive therapy should be started in those with BP consistently > 140/90 mmHg and treated to a target of < 140/90 mmHg (grade: strong; level: I).
  • Dual renin-angiotensin system blockade is not recommended in patients with CKD (grade: strong; level: I).
  • For patients with CKD, aiming towards a systolic BP < 120 mmHg has shown benefit, where well tolerated (grade: strong; level: II).
  • In people with CKD, where treatment is being targeted to less than 120 mmHg systolic BP, close follow-up of patients is recommended to identify treatment-related adverse effects, including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: I).
  • In patients with CKD, aldosterone antagonists should be used with caution in view of the uncertain balance of risks versus benefits (grade: weak; level: –).

Diabetes:

  • Antihypertensive therapy is strongly recommended in patients with diabetes and systolic BP ≥ 140 mmHg (grade: strong; level: I).
  • In patients with diabetes and hypertension, any of the first-line antihypertensive drugs that effectively lower BP are recommended (grade: strong; level: I).
  • In patients with diabetes and hypertension, a BP target of < 140/90 mmHg is recommended (grade: strong; level: I).
  • A systolic BP target of < 120 mmHg may be considered for patients with diabetes in whom prevention of stroke is prioritised (grade: weak; level: –).
  • In patients with diabetes, where treatment is being targeted to < 120 mmHg systolic BP, close follow-up of patients is recommended to identify treatment-related adverse effects including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: –).

Myocardial infarction:

  • For patients with a history of myocardial infarction, ACE inhibitors and β-blockers are recommended for the treatment of hypertension and secondary prevention (grade: strong; level: II).
  • β-Blockers or calcium channel blockers are recommended for symptomatic patients with angina (grade: strong; level: II).

Chronic heart failure:

  • In patients with chronic heart failure, ACE inhibitors and selected β-blockers are recommended (grade: strong; level: II).
  • ARBs are recommended in patients who do not tolerate ACE inhibitors (grade: strong; level: I).

Peripheral arterial disease:

  • In patients with peripheral arterial disease, treating hypertension is recommended to reduce cardiovascular disease risk (grade: strong; level: –).
  • In patients with hypertension and peripheral arterial disease, any of the first-line antihypertensive drugs that effectively reduce BP are recommended (grade: weak; level: –).
  • In patients with hypertension and peripheral arterial disease, reducing BP to a target of < 140/90 mmHg should be considered and treatment guided by effective management of other symptoms and contraindications (grade: strong; level: –).

Older people:

  • Any of the first-line antihypertensive drugs that effectively reduce BP can be used in older patients with hypertension (grade: strong; level: I).
  • When starting treatment in older patients, drugs should be commenced at the lowest dose and titrated slowly as adverse effects increase with age (grade: strong; level: –).
  • For patients > 75 years of age, aiming towards a systolic BP of < 120 mmHg has shown benefit, where well tolerated, unless there is concomitant diabetes (grade: strong; level: II).
  • In older people whose treatment is being targeted to < 120 mmHg systolic BP, close follow-up is recommended to identify treatment-related adverse effects including hypotension, syncope, electrolyte abnormalities and acute kidney injury (grade: strong; level: II).
  • Clinical judgement should be used to assess benefit of treatment against risk of adverse effects in all older patients with lower grades of hypertension (grade: strong; level: –).

Adapted with permission from the National Heart Foundation of Australia. Guideline for the diagnosis and management of hypertension in adults – 2016. Melbourne: NHFA, 2016. * Grade of recommendation based on the Grading of Recommendations Assessment, Development and Evaluation (GRADE) methodology;11 level of evidence according to the National Health and Medical Research Council standards10 — no level of evidence included where there was no direct evidence for a recommendation that the guideline developers agreed clearly outweighed any potential for harm.

Relieving the pressure: new Australian hypertension guideline

The National Heart Foundation guideline has been updated to reflect recent evidence and Australian conditions

Some might think there are too many guidelines on hypertension.1–4 However, given that hypertension is a major risk factor for premature death and disability from cardiovascular disease in Australia and globally,5,6 practitioners need a practical, contemporary and localised guide to best practice. Most countries and their hypertension societies publish their own guidelines. However, these differ for a number of reasons. The data available at the particular time of publication vary. Experts interpret these data differently. To varying extents, guideline development is subject to vested interests, such as governments and other funders wishing to keep down costs, or industry wishing to make treatments available to the widest groups possible.7 The scope of the review can vary from people with uncomplicated hypertension to those with a broad range of complications and comorbidities.

In Australia, we are fortunate that the National Heart Foundation has produced an excellent new guideline (http://heartfoundation.org.au/for-professionals/clinical-information/hypertension) adapted for Australian conditions at a time when knowledge of the field has been moving rapidly.8

One of the first questions concerns who is at risk from hypertension and who warrants active treatment. For some time now, Australian recommendations have followed an absolute risk approach, rather than determining risk on blood pressure alone. The recommended way of assessing this is according to the national guideline on absolute risk developed by the National Vascular Disease Prevention Alliance.9 This predicts the risk of a major cardiovascular event or death in the next 5 years based on a modified Framingham equation. Unless blood pressure is very high, it is a better way of identifying someone who will benefit from treatment than blood pressure alone as a single risk factor. It also helps deal with the well-known epidemiological paradox that most people who have heart attacks or strokes caused by elevated blood pressure do not meet the conventional definition of hypertension. Blood pressure in the high-normal range (130–139/85–89 mmHg) carries risk, and this includes a substantial proportion of the population. This applies equally to other risk factors such as blood glucose or lipids. Levels below arbitrary cut-points capture only some of the relevant risk.

However, risk assessed this way is very much influenced by age and is not applicable in young adults (aged under 45 years, or under 35 years in Indigenous Australians) or older adults (aged over 74 years).9

Recent hypertension trials and other guidelines have used several methods of assessing risk, including the presence or absence of target organ damage and clinical indicators such as abdominal girth, tobacco use and low levels of high-density lipoprotein.10 What can be concluded is that there are many ways of assessing risk and, if risk is high, people benefit from modification of standard risk factors including blood pressure.

The 2016 guideline offers advice on new areas including out-of-clinic blood pressure measurement using ambulatory or home procedures, white coat hypertension and blood pressure variability. It includes updated evidence on the management of hypertension with comorbidities including chronic kidney disease, diabetes and peripheral arterial disease. There are minor updates to recommendations on first-line and combination pharmacotherapies.

A key area of debate surrounds the revision of treatment targets based on new evidence for a target blood pressure of < 120 mmHg in patients at high risk for cardiovascular events but without diabetes.11 Where supported by evidence, a target of < 140 mmHg is recommended. In other groups where there is supporting evidence, the recommendation is to aim for < 120 mmHg but be cautious about adverse events, especially in older, frail people. The debate is fuelled by the lack of data in particular patient groups but increasingly the clinical trial data are aligning with the epidemiology, with lower blood pressure being associated with better outcomes across the spectrum.12

The holes in the evidence base and shifting ground as new evidence comes forward are not helpful to the individual clinician who wants advice on what to do for a particular patient on a particular day. Further, we know that despite the proliferation of guidelines, most people with hypertension do not achieve the goals of their therapy, irrespective of what country they live in and what guideline is being followed. In future, we need to move the emphasis from large tomes written by expert groups to providing decision support individualised to the patient.

The new guideline also covers emerging evidence on diagnostic and therapeutic aspects of hypertension, including the uncertain role of renal denervation. It will guide management of hypertension in Australia for the immediate future. I commend it to readers of the MJA and thank all those who contributed to its preparation.