
Vitamin D and tuberculosis: hope or hype?

It may be worthwhile to test for and treat vitamin D deficiency in latent infection, but not in active TB

In this issue of the Journal, MacLachlan and Cowie advocate increased testing of vitamin D (serum 25-hydroxyvitamin D [25-OHD]) for people with risk factors for vitamin D deficiency and tuberculosis (TB).1 This pertinent suggestion is based on the assumption that vitamin D deficiency is a risk factor for progression from latent to active TB, and that correction of deficiency could reduce this risk (the demonstration by MacLachlan and colleagues2 of TB seasonality in Australia is consistent with this hypothesis). The proposal presents a timely opportunity to scrutinise evidence of an association between vitamin D deficiency and TB, temper the high hopes that vitamin D might be an important adjunctive treatment for active TB, and remind clinicians about problems with testing and interpreting 25-OHD levels.

Many communicable diseases are seasonal — for example, influenza, rotavirus and TB. But many potential risk factors are also seasonal — temperature, time spent indoors, household crowding, humidity, ultraviolet radiation (which has immunological effects independent of vitamin D),3 incidence of co-infections, and, potentially, nutritional intake. An important maxim to remember when interpreting the literature linking vitamin D to seasonality — and to a wide range of conditions, from cancer to cardiovascular disease and schizophrenia to bacterial vaginosis — is that, as MacLachlan and Cowie point out, correlation does not imply causation.1

We know that serum 25-OHD levels are low in active TB.3 What might be the explanation? One hypothesis is that 25-OHD is appropriately low because it is being converted to 1,25-dihydroxyvitamin D (activated vitamin D [calcitriol]). Indeed, calcitriol, an important factor in human innate antimycobacterial immunological responses,4,5 has been found to be elevated in active TB.6 Also, 25-OHD concentration can recover spontaneously over time with TB treatment,3 but has been shown to fall during TB immune restoration syndrome,7 suggesting that a low 25-OHD concentration could be a consequence of immunological activation (a negative acute phase reactant). Further data are needed; many factors may account for low 25-OHD levels in active TB, and the relationship could be bidirectional.

There is increasing evidence that in high TB-burden settings — using doses considered safe for programmatic deployment where calcium, 25-OHD or calcitriol levels cannot readily be measured — vitamin D supplementation in active TB does not improve TB outcomes.8,9 The situation may be different in well resourced settings, where management of TB and underlying conditions (HIV, diabetes and, possibly, vitamin D deficiency) can be individually tailored. Using high vitamin D doses in a well resourced, monitored setting is generally safe10 (but not always)11 and may shorten the time to sputum smear conversion.12 In a small subset of TB patients with a specific vitamin D receptor genotype and a low mean baseline 25-OHD concentration, it may be associated with faster sputum culture conversion.10 Two other trials of vitamin D supplementation for active TB have been published, but methodological issues impair the ability to draw firm conclusions.13,14 A further article has been submitted for publication. A meta-analysis is needed — it would probably conclude that supplementary vitamin D is not helpful in active TB overall, but may benefit some outcomes in a selected minority of patients. However, treating vitamin D deficiency in patients with TB may be relevant for non-TB end points.

Contrastingly, MacLachlan and Cowie are advocating a test-and-treat strategy for vitamin D status before active TB develops. This makes immunological sense.4,5 The single prospective study examining risk of progression to active TB after exposure found that seven of 30 people (23%) with serum 25-OHD levels < 17 nmol/L developed TB, but that only one of 64 people (1.6%) with 25-OHD levels ≥ 17 nmol/L did so.15 Other factors may have explained both the profoundly low 25-OHD levels and the increased TB risk; nevertheless, in light of the accumulating evidence, maintenance of latency appears to be the most appropriate stage of infection to target with a vitamin D intervention.16 However, since the 25-OHD concentration associated with TB reactivation in the above study was very low (< 17 nmol/L), the impact of correcting all cases of vitamin D deficiency (< 50 nmol/L) on TB incidence at the population level may be small. A prospective study of vitamin D replacement in latent TB would strengthen the evidence base for the association between vitamin D deficiency and TB. However, clinical trials in this field are challenging to conduct because of the large sample sizes required, the fact that tests for latent TB (relying on immunological responses) may themselves be influenced by vitamin D status, and the need for correction of deficiency to show benefits over and above recognised latent TB treatments (eg, isoniazid preventive therapy and HIV treatment). This raises the question of whether we need to await randomised controlled trial evidence before making a public health recommendation. Given the potential benefits of correcting vitamin D deficiency independent of TB risk, MacLachlan and Cowie’s recommendation seems reasonable.

Notably, promotion of 25-OHD testing coincides with the Royal College of Pathologists of Australasia calling for restraint.17 They recommend measuring 25-OHD only in people with listed risk factors or clinical or laboratory evidence of deficiency,17 as MacLachlan and Cowie advocate. One reason for restraint in ordering a 25-OHD test is that many widely used automated assays lack accuracy and reproducibility.3,17 Clinicians need to understand the test’s limitations, including that different methods may give different results.17 Liquid chromatography–tandem mass spectrometry has better performance characteristics, but is less widely available and its results are operator-dependent.

It is becoming popular to test 25-OHD routinely in people with active TB. Based on the information above, this is misplaced: not all patients with TB have risk factors for vitamin D deficiency, low 25-OHD levels can recover spontaneously during treatment of active TB, and correcting low 25-OHD levels appears not to improve TB outcomes.

Finally, to address the contentious issue of vitamin D reference intervals: these are unusual among laboratory assays in that they are no longer based on normal population data. The lower limit of normal has gradually crept up from 25 nmol/L to 75 nmol/L or higher.3 Inevitably, increasing proportions of the population are therefore now “deficient”. Although this move is motivated by concerns for bone health, the appropriateness of such targets requires scrutiny. Recent reviews conclude that most (> 80%) of the potential benefits of vitamin D for a range of diseases are achieved with 25-OHD levels around 50 nmol/L, with only marginal additional gains at higher levels.18 Further, there appears to be an upper safety limit which, although still poorly defined, should be recognised. U-shaped curves of disease risk in relation to 25-OHD concentration have been reported for mortality (cancer, cardiovascular and all-cause) and for likelihood of active TB, with risks increasing at 25-OHD levels above 80–140 nmol/L.19-21 Popular advice promoting high target levels (eg, 125 nmol/L22) is unhelpful and potentially risky.

In summary, we support appropriately targeted testing and treatment of vitamin D deficiency, bearing in mind assay limitations and the implausibility of some proposed 25-OHD targets. Any effect that this strategy would have on risk of progression from latent to active TB remains hypothetical. Given that vitamin D deficiency is easily prevented and treated but requires a high index of suspicion to detect, it is sensible to promote testing in at-risk groups. Correcting deficiency after the horse has bolted (after development of active TB) appears to be too late to have appreciable effects on TB outcome. 25-OHD deficiency detected at the time of active TB diagnosis may be self-limiting; a better assessment of vitamin D status might be gained by deferring testing until 2 months after treatment initiation, when the inflammatory milieu has subsided. More prospective studies of TB-exposed people would help answer the persisting question of the relationship between vitamin D deficiency and failure of latency. However, ethical considerations generally prohibit such studies if they exclude interventions to correct vitamin D deficiency or treat latent TB. In the meantime, further exploration of why 25-OHD levels are low in people with active TB will help answer this question.

Reports indicate that changes are needed to close the gap for Indigenous health

Major changes in health services are needed to redress health disparities

Two recently released reports from the Australian Institute of Health and Welfare (AIHW) make it clear that there must be major changes in the way health services for Indigenous Australians are delivered and funded if we are to improve Indigenous health and health care and ensure real returns on the substantial investments that are being made.1,2

These reports show Australia’s level of financial commitment to Indigenous health. In the 2010–11 financial year, total spending on Indigenous health was $4.552 billion,1 almost double that spent in 2004–05. This amounted to $7995 for every Indigenous Australian, compared with $5437 for every non-Indigenous Australian;1 over 90% of this funding came from governments. The surest sign that this money was not well invested in prevention, early intervention and community services is that most of it (on average $3266 per person, and $4779 per person in remote areas) was spent on services for patients admitted to hospitals. Meanwhile, per-person spending on Medicare services and on medicines subsidised by the Pharmaceutical Benefits Scheme (PBS) was lower than for non-Indigenous Australians by $198 and $137, respectively.2

The series of AIHW reports since the 1995–96 financial year highlights both where progress has been made and where programs have failed. There have been considerable increases in funding for primary care, acute care and community and public health. The 2010–11 data do not reflect the full implementation of the Indigenous Chronic Disease Health Package, but do suggest that the measure to subsidise PBS copayments for patients with chronic disease is having an effect, specifically in more remote areas where PBS spending is higher than in regional areas.

On the other hand, it is obvious that access to primary care services in remote areas remains limited, and access to referred services such as specialists and diagnostics is poor for Indigenous people everywhere, even in major cities. Per-person spending on non-hospital secondary services is about 57% of that for non-Indigenous people.2 Indigenous Australians receive nearly all their secondary care in hospitals.

The hospital data hammer the story home. In 2010–11, the overall age-standardised separation rate of 911 per 1000 for Indigenous people was 2.5 times that for non-Indigenous people; for Indigenous people living in the Northern Territory, the rate was 7.9 times that for non-Indigenous people.3

About 80% of the difference between these rates was accounted for by separations of Indigenous people admitted for renal dialysis, but further examination highlights how a lack of primary care and prevention services drives increased hospital costs. In 2010–11, total expenditure on potentially preventable hospitalisations for Indigenous Australians was $219 million, or $385 per person, compared with $174 per non-Indigenous Australian.3 For all Australians, most of this spending is for chronic conditions such as complications of diabetes but, too often, Indigenous Australians are hospitalised for vaccine-preventable conditions such as influenza and pneumonia, acute conditions such as cellulitis, and injury.

Avoidable hospitalisations are an important indicator of effective and timely access to primary care, and provide a summary measure of health gains from primary care interventions. The inescapable reality is that current primary care interventions are not working.

We know what the problems are. Around two-thirds of the gap in health outcomes between Indigenous Australians and other Australians comes from chronic diseases such as cardiovascular disease, diabetes, respiratory diseases and kidney disease.4 Suicide and transport accidents and other injuries are also leading causes of death.5 Half of the gap in health between Indigenous and non-Indigenous Australians is linked to risk factors such as smoking, obesity and physical inactivity.6 A number of studies have found that between a third and half of the health gap is associated with differences in socioeconomic status such as education, employment and income.7

The 2006 Census (the latest available data) found that 39% of Indigenous people were living in “low resource” households (as defined by the Australian Bureau of Statistics8), almost five times the non-Indigenous rate.9 Such disparities in income limit Indigenous people’s capacity to pay for health care and provide some context for why they are more likely to use public hospitals than privately provided services that require copayments.

There are commitments from all the major stakeholders, political parties and policymakers to close the gap. There is a new National Aboriginal and Torres Strait Islander Health Plan 2013–2023. And, arguably, there are enough funds if these are spent wisely. What is needed is a new approach to how health care is developed for and delivered to Indigenous Australians.

The approach needs to be grounded in three broad principles:

  • Adhering to the principle of “nothing about me without me”.10 Shared decision making must become the norm, with patients and their needs at the centre of a system they drive.

  • Addressing the social determinants of health, in particular, the impact of poverty.

  • Addressing cultural barriers in the way that Indigenous people want.

These are not new ideas and all the right words are in the new national health plan, as they were in the previous strategy document — cross-portfolio efforts, partnership, sustainability, culturally competent services, community, a rights-based approach to providing equal opportunities for health. What we must do is move beyond these fine words to meaningful action.

We have the exemplar of how to do this with Aboriginal Community Controlled Health Organisations (ACCHOs), and we need to (i) provide increased opportunities for engagement, collaboration and service delivery with ACCHOs and (ii) expand this way of working into mainstream services. This will require a different approach to policy development and implementation.11

The key barriers to health care for urban and remote populations alike relate to availability, affordability and acceptability12 and the dominance of biomedical models of health.13 ACCHOs are a practical expression of self-determination in Indigenous health and health service delivery,14 and have been very successful at reducing many of the barriers that inhibit Indigenous access to mainstream primary care.15 Importantly, ACCHOs provide both cultural safety, which allows the patient to feel safe in health care interactions and be involved in changes to health services, and cultural competence, which reflects the capacity of the system to integrate culture into the delivery of health services.16

However, the success of the design and work practices of ACCHOs has had little influence on the mainstream health system,17 which remains, necessarily, the source of health care for many Indigenous people. And it can be argued that the current funding and regulatory practices of Australian governments impose a heavy burden, consuming too much of the scarce resources of ACCHOs in acquiring, managing, reporting on and acquitting funding contracts.18

Governments and all stakeholders, including Indigenous people themselves, need to be bold enough to redesign current mainstream health policies, programs and systems to better fit Indigenous health concepts, community needs and culture. This approach should not be seen as radical — it is where we are currently headed with Medicare Locals. We should not ignore the fact that ACCHOs have led the way in developing a model of primary health care services that is able to take account of the social issues and the underlying determinants of health alongside quality care.19 Tackling these reforms will therefore benefit all Australians, but especially those Indigenous people who currently feel disenfranchised. Without real and meaningful change, we are all condemned to more government reports bearing sad, bad news and a continual yawning gap of Indigenous disadvantage.

Think before you insert an intravenous catheter

To the Editor: Peripheral intravenous catheter (PIVC) infection has been highlighted by one of us (R L S) and colleagues as an important, costly and dangerous complication of PIVCs.1 Infection prevention programs often employ multifaceted interventions and, in the case of PIVCs, much debate has focused on aseptic insertion and appropriate dwell times, but little attention has been paid to whether the PIVC is required at all.2

The concept of the “idle IV” catheter was introduced over 20 years ago,3 yet at our tertiary hospital emergency department (ED), we found that half of PIVCs inserted were unused. In 43% of patients admitted to the hospital, the PIVC remained unused at 72 hours.4 Patients presenting with obstetric, gynaecological and neurological symptoms were significantly more likely to have an unused PIVC.4

Focus group testing revealed that it had become a culture within our ED to insert a PIVC in most patients “just in case”. We have started a program encouraging staff to think before they insert an IV catheter, and to “just say no to the ‘just in case’ PIVC”. Our study did not examine harm caused by unused PIVCs, but the potential benefits of such a program include reduction in infections, phlebitis, patient discomfort and financial cost (24 cents for venepuncture compared with $4.37 for PIVC equipment).1

To address the potentially preventable, iatrogenic complication of infected PIVCs, a sometimes-neglected point of intervention is for clinicians to decide whether the PIVC is actually required in the first place.

Epidemiology of tuberculosis and levels of vitamin D in Australia: person, place and time

To the Editor: We recently reported the impact of latitude on seasonality of tuberculosis in Australia, with greater cyclic variation in southern parts of the continent.1 We hypothesise that this seasonality is partly determined by differences in ultraviolet radiation exposure and subsequent vitamin D synthesis.1-3

Vitamin D deficiency (serum 25-hydroxyvitamin D levels below 50 nmol/L)3 has become a significant public health concern in Australia. Australians with darker skin, including some migrants and Aboriginal and Torres Strait Islander people, are at particular risk of both vitamin D deficiency3,4 and tuberculosis.1,5

Similar to our findings regarding tuberculosis incidence,1 a recent Australian study found that vitamin D deficiency was most prevalent in spring and that risk was highest for residents in major cities, people from socioeconomically disadvantaged areas, and those aged 20–39 and ≥ 80 years.2 These factors also apply to tuberculosis, including the age distribution (Box).

The correlations between seasonal variations in, and risk factors for, vitamin D deficiency and tuberculosis in Australia reinforce the ecological association between these conditions. Such associations cannot determine causality; but their consistency argues that guidelines should consider the potential impact of vitamin D deficiency on people at greatest risk of tuberculosis.1 The increased risk of tuberculosis conferred by vitamin D receptor polymorphisms supports a causal role for vitamin D deficiency in active tuberculosis.6

Despite increasing observational data regarding vitamin D deficiency and risk of tuberculosis, evidence supporting vitamin D supplementation to reduce this risk is lacking.6 However, given the broader potential health impacts of vitamin D deficiency in high-risk populations, we support recent calls to increase vitamin D testing in these groups and to promote supplementation for those at greatest risk of both tuberculosis and vitamin D deficiency, including migrants with darker skin.3

Serum 25-hydroxyvitamin D levels of at least 50 nmol/L at the end of winter have been recommended for optimal bone and muscle function,3 with supplementation continued into spring to avert depletion.2 This seasonal focus on ensuring adequate vitamin D levels in people at risk of both vitamin D deficiency and tuberculosis could also reduce the seasonal peak of disease each spring, particularly in the southern states of Australia.1

Age distribution of tuberculosis (TB) notifications* and vitamin D levels in Australia

25-OHD = 25-hydroxyvitamin D. * Source of TB notification data: National Notifiable Diseases Surveillance System. Source of vitamin D data: Boyages and Bilinski.2

A budding surprise from the joint

Reactivation of dormant infections is increasingly recognised with immunosuppression for rheumatic diseases. Septic arthritis from dimorphic fungi is exceedingly rare in non-endemic settings. We describe the first Australian case of Histoplasma capsulatum septic arthritis in a man of Laotian descent who was receiving treatment for seropositive rheumatoid arthritis.

Clinical record

A 78-year-old man of Laotian descent but resident in Australia for many years was referred in October 2010 with bilateral wrist extensor tenosynovitis, and a diagnosis of seropositive rheumatoid arthritis was made (cyclic citrullinated peptide antibody level, > 200 U/mL [reference interval, < 5 U/mL]; rheumatoid-factor negative). Although oral methotrexate initially improved disease control, he later had progressive disease involving his knees and ankles. Prednisone (5–15 mg daily) and leflunomide (10 mg daily) were added without much improvement, and he was considered for further immune modulation therapy. The patient’s background included tuberculous cervical lymphadenitis in 2009 (positive result of culture for Mycobacterium tuberculosis), treated with 12 months of standard quadruple therapy.

In March 2012, the patient noticed worsening right ankle pain with synovitis, and a joint aspirate showed inflammatory synovial fluid (white cell count, 5100 × 10⁶/L) with negative results of bacterial cultures. Intra-articular corticosteroids provided symptomatic relief; however, the right ankle worsened in July 2012, and this time a repeat intra-articular corticosteroid injection failed to improve his symptoms. Due to ongoing synovitis in other joints and raised inflammatory markers, he was given a single 8 mg/kg dose of intravenous tocilizumab (a humanised, monoclonal, interleukin-6 receptor antibody). A repeat ankle aspiration and corticosteroid injection was performed concurrently. One week after the tocilizumab infusion, the right ankle synovitis became more prominent, and Histoplasma capsulatum was isolated from the most recent ankle aspirate. The diagnosis was confirmed by morphology of the yeast forms (Box, A), thermal dimorphism (mould phase converting to yeast form) (Box, B) and a panfungal polymerase chain reaction, which showed the same organism. He was systemically well with no fevers, sweats or weight loss. C-reactive protein was mildly raised at 10.4 mg/L (reference interval, < 5 mg/L). Arthrotomy with debridement and lavage of the ankle joint was performed, and surgical specimens also showed H. capsulatum. Computed tomography scans of his brain, chest and abdomen did not show any evidence of disseminated disease. Further history showed he had frequently travelled to rural Laotian communities as a tourist before his diagnosis of rheumatoid arthritis in 2010. After 2 weeks of induction therapy with intravenous liposomal amphotericin, treatment was maintained with oral itraconazole at therapeutic levels. Immunosuppression was minimised with cessation of methotrexate, leflunomide and tocilizumab, and gradual withdrawal of all corticosteroid treatment. He has made a steady recovery with no flare of his inflammatory arthritis to date.

Discussion

We believe this is the first report of H. capsulatum septic arthritis in Australia. H. capsulatum is a thermally dimorphic soil fungus, growing as a mould in the environment and as yeast at body temperature.1 It is endemic to North America, parts of Europe and South-East Asia (where our patient was born). Infection occurs with inhalation of spores aerosolised during activities that disturb soil. Initial infection is transient and asymptomatic in most of the immunocompetent population, with fewer than 5% of people estimated to be symptomatic.2 Macrophages play a key role in clearing histoplasma infection, and this is why disseminated disease occurs mostly in immunocompromised patients. Disseminated histoplasmosis has increasingly been reported in endemic areas after use of tumour necrosis factor-alpha (TNF-α) inhibitors for rheumatoid arthritis.3 Histoplasmosis is three times more common than tuberculosis as a cause of serious infection in patients receiving TNF-α blockers.4 While a sterile immune-mediated polyarthritis is well recognised to occur in conjunction with symptomatic pulmonary histoplasmosis,5 localised septic arthritis is exceedingly rare, with only seven case reports in the literature.6

Tocilizumab has not been reported in association with histoplasmosis, and is unlikely to have been the cause in this case, as the diagnosis of septic arthritis was made at the same time as tocilizumab was administered. However, the additional immune suppression may have worsened the patient’s symptoms.

A monoarthritis in a patient with a known diagnosis of an inflammatory polyarthritis such as rheumatoid arthritis is septic arthritis until proven otherwise. Atypical infectious organisms should be considered as a possibility in patients who are immunosuppressed, including those treated with monoclonal antibodies. Dormant infectious organisms can reactivate in the setting of immunosuppression and can cause septic arthritis even in areas of low endemicity.

A: Gram stain appearance of the joint aspirate showing both intracellular and extracellular yeast forms of Histoplasma capsulatum; and B: fungal culture slope (potato dextrose agar) showing the mould phase of H. capsulatum at 28°C

Take your time

Without the accoutrements of fame, American actress Angelina Jolie would be just one of many young women who are using genetic knowledge to manage their breast and ovarian cancer risk. However, with her unavoidable celebrity, Jolie’s explanation of her preventive double mastectomy, and possible later oophorectomy, to reduce the risk associated with a BRCA1 mutation (http://www.nytimes.com/2013/05/14/opinion/my-medical-choice.html) has again brought breast and ovarian cancer to public attention.

In its essentials, Jolie’s story is an exercise in the clinical management of genetic information to better one’s future health. In such situations, both doctors and patients need time (an often scarce commodity) to negotiate the complex pathway from knowledge to clinical action, as our understanding of diseases and their associations becomes more intricate, but is still incomplete.

Despite the cultural power of a famous actress’s real-life story in creating positive effects on health behaviour in society, the complexities of clinical interpretation and practice — and how they affect patients’ decisions — can be unintentionally sidelined in public discussion. Following the surge of media interest in Jolie’s announcement, referrals to two familial cancer centres in Victoria almost immediately doubled, according to James and colleagues (doi: 10.5694/mja13.11218). Many of these people had family histories suggesting carriage of a relevant mutation, and among them were probably people at risk who may not otherwise have presented for genetic testing and counselling. Some good may indeed have come from the burst of publicity. But the complex discussion and decision making involved — requiring a concurrent understanding of disease risk, genetics and oncogenesis, and its nuanced application to an individual’s circumstances — likely caught many of those presenting to the clinics off guard. Among women with known BRCA1 and BRCA2 mutations, there is currently quite low uptake of preventive options, as research presented by Collins and colleagues (doi: 10.5694/mja13.10848) shows. They propose reasons for this, but it is uncertain what proportion of women not undertaking preventive measures make a fully informed decision not to act, and what proportion are not treated because of an unintended gap in care.

With continuing advances in the field, it is timely to discuss the current application of germline genetics to cancer more generally. Winship and Tucker (doi: 10.5694/mja13.10978) provide an overview of our genetic knowledge about many cancers, and its interpretation and application to clinical decisions, which is now mature enough to be part of routine care. Informed patient counselling requires significant investment of time and effort. Current and future developments, especially in genomics and next-generation genetic sequencing, bring ethical and social challenges as well as clinical ones. The old idea that genes would provide clear answers has certainly gone.

Other clinical problems also demonstrate the intersection of incomplete knowledge, problems in diagnostic capability and interpretation, and imperatives to act on the information we have. In a letter to the Editor, MacLachlan and Cowie (doi: 10.5694/mja13.10478) propose that low vitamin D levels increase the likelihood of reactivation of tuberculosis (TB), citing the coincident seasonality of active TB cases and vitamin D deficiency. They advocate vitamin D testing and supplementation in groups at high risk. In an editorial, Truswell (doi: 10.5694/mja13.11121) outlines plausible physiological reasons for this observation, which may explain the use of sunshine and cod liver oil for treating patients with TB in the sanatoria of old. But, as Ralph and Lucas argue (doi: 10.5694/mja13.11174), many questions remain unanswered about accurate vitamin D testing and interpretation, the benefits of supplementation, and potential harms of oversupplementation. Should we wait for a large-scale randomised controlled trial examining the effects on TB of treating vitamin D deficiency to make a public health recommendation? Or can we act on less definitive evidence and, if so, what level of evidence should that be?

Proper planning for a “good death” for those with increasingly debilitating chronic illness needs to be calm, careful and mindful of the patient’s relationships, values and specific wishes, and not devised “on the fly” in a health crisis. Sadly, the reality is that in many cases timely planning does not take place. Scott and colleagues outline the many positive clinical and psychological benefits of advance care planning, and ways to overcome obstacles (doi: 10.5694/mja13.10158). Clinicians should be given proper opportunity to develop advance care plans with patients; even a little more time out of a busy schedule would go a long way. For the community to accept that everyone should allow for such planning as an essential part of their later years, perhaps we now need celebrities to publicly and articulately talk about their own advance care plans. Ultimately, we all need to realise the supreme importance of time, not only for advance care planning, but also for wellbeing — time to discover, time to think, time to talk, time to act.

Risk of measles transmission on aeroplanes: Australian experience 2007–2011

To the Editor: We thank Hoad and colleagues for presenting data from Australian states and territories regarding in-flight exposure to measles.1

The current cases in Queensland, Victoria and New South Wales, which have included domestic air travel,2 highlight the challenges measles still holds for Australia. Hoad et al’s report outlines the substantial public health response that such air travel events generate, despite the low likelihood of being able to use postexposure prophylaxis (PEP), measles–mumps–rubella vaccine or immunoglobulin in exposed contacts. They suggest that direct email or text messaging of passengers should be considered as an alternative to routine contact tracing, but identify that contact information is not always available to health departments.

Given this, we recommend that airlines be made responsible for alerting passengers in the event of an urgent public health concern, such as potential exposure to measles.

This would require airlines to keep travel manifests for at least 14 days after a flight. Potentially exposed passengers could be directed to a health department website providing information about PEP options, use and access — for example, through their general practitioner — as well as self-monitoring for measles symptoms, and, should symptoms occur, social distancing measures.

Airlines already use social media and email to communicate with customers, and, we believe, have a duty of care to passengers that extends beyond safe arrival at their travel destination. Measles is a serious illness that leads to hospitalisation of around one-third of affected young adults in Australian outbreaks.3-6 Making airlines, as the primary holders of travellers’ details, responsible for initial contact would minimise delays and communication errors and maximise the likelihood of preventing further generations of infection in the wider community.

A description of human hydatid disease in Tasmania in the post-eradication era

Human hydatid disease, or echinococcosis, is a helminthic infection that leads to the formation of fluid-filled cysts in the liver, lungs and other organs. Echinococcus granulosus, which causes cystic echinococcosis, or unilocular cyst disease of viscera,1 is the only member of the genus Echinococcus to be found in Australia. It was introduced into Australia during the early period of European settlement and had been described in domestic animals before 1840.2

E. granulosus is a cyclozoonosis, requiring at least two species of vertebrates as definitive and intermediate hosts (Box 1). Dogs and other canids, such as dingoes and foxes, are definitive hosts; they are infected by ingesting metacestodes (cysts) in mammalian organs and subsequently shed infective eggs containing larval oncospheres in their faeces. The intermediate host is infected following ingestion of infective eggs. The intermediate host range is broad and regionally specific.3 It includes domestic and feral ungulates such as sheep, goats, pigs, camels and buffaloes, and marsupials such as kangaroos and wallabies.4 This broad host range has resulted in the establishment of domestic and sylvatic cycles, which in some regions may intersect.

Humans are infected as intermediate hosts. Following egg ingestion, the oncosphere is absorbed through the intestinal wall and deposited via the circulatory system to various organs, with subsequent cyst formation. The most common sites of human disease are the liver (> 65%) and lungs (20%).3,5 Other less commonly affected areas include other intra-abdominal sites such as kidney, spleen, peritoneal cavity and, more rarely, spine, brain, heart and bone.3 Infection in some sites, particularly the liver, may remain asymptomatic for many years and is usually diagnosed well into adulthood, often incidentally. Conversely, disease in the central nervous system becomes symptomatic and presents much sooner after initial infection.

During the past century, Tasmania experienced one of the highest rates of human hydatid disease in the world,6,7 perpetuated by a hydatid life cycle involving dogs and sheep. Between 1957 and 1967, 28 fatal human cases were recorded.7 Estimates of human disease based on surgical cases in the 1950s and early 1960s ranged from 92.5 to 151 cases per 100 000 population per decade.8,9 At this time, the Australian annual rate was 1.6 per 100 000 human population nationally and 7.8 per 100 000 in rural areas.10 Surveys of slaughtered sheep in different Tasmanian regions in 1963 revealed a hydatid prevalence of 35%–73%,8 and surveys of rural dogs reported a prevalence of 12.7%.6

Increasing concern regarding the human health impact of hydatid disease led to the formation of the Tasmanian Hydatids Eradication Council in 1962.6 A systematic campaign commenced in 1965. This encompassed regular testing of dog faeces, anthelmintic treatment of dogs, examination of abattoir-slaughtered sheep, community education regarding safe slaughtering practices and farm hygiene, and the prohibition of feeding offal to dogs.6

The eradication campaign resulted in a rapid and significant reduction in the prevalence of hydatid infection among dogs and sheep,11 and the surgical incidence of hydatid disease among humans.12

Tasmania was declared provisionally hydatid-free in 1996. Since then, abattoir surveillance has continued but there has been no systematic review of human hydatid cases in the state. As new cases continue to be detected, we undertook a retrospective case review to determine the features of human hydatid disease in Tasmania after the provisional declaration of eradication and, in particular, to determine if there is evidence of acquisition of disease after the early 1970s.

Methods

From 30 July to 30 October 2012, systematic data collection was undertaken to identify cases of hydatid disease in patients who presented to medical practitioners in Tasmania from January 1996 to July 2012. Approval for the study was obtained from the Human Research Ethics Committee (Tasmania). Patients undergoing serological testing for hydatids were identified from the Royal Hobart Hospital Department of Pathology laboratory information system. Hospital admissions were identified from discharge coding data from all major Tasmanian public hospitals (Royal Hobart, Launceston General, Mersey Community and North West Regional) and the Royal Hobart Hospital Microbiology and Infectious Diseases Unit consultation database. Permission was obtained from the Director of Public Health to access the Department of Health and Human Services notifications for hydatid disease for the years 1996–2012.

Identified patients were contacted by telephone by the study coordinator to obtain consent to participate in the study. If verbal consent was given, participants were mailed a consent form to complete and return. After written consent was received, the participants were interviewed by telephone using a standardised questionnaire, and hospital records were used to obtain additional information. Information for patients who had died or could not be contacted was obtained from the medical record.

Data variables collected included current age, sex, year of and age at initial diagnosis, symptoms at diagnosis, site of disease, results of diagnostic imaging and serology (if performed), and date and type of surgical management. Participants were asked to estimate the likely region and time of acquisition.

A case definition of hydatid disease was developed. Hydatid disease was considered to be confirmed if a clinical diagnosis was made by the surgeon based on consistent intraoperative findings, if the diagnosis was made histologically, or if typical cystic lesions were present on imaging. Year of diagnosis was defined as the year of initial presentation with hydatid disease. Patients diagnosed before 1996 were included in the analysis. Study participants in whom the diagnosis could not be confirmed were excluded from further analysis.

Results

Fifty-one patients with possible cases of hydatid infection were identified, of whom 41 fulfilled the case definition. Median age for patients was 71 years (range, 44–99 years). There were 21 women and 20 men. Patient demographics and clinical features are summarised in Box 2. Two patients were born after 1965: one in 1967 and one in 1968.

From January 1996 to July 2012, 25 patients were diagnosed with hydatid disease, with an average rate of 1.3 cases per year (range, 0–3 cases per year) (Box 3). Of the 25 patients, 10 (40%) had been notified to the Department of Health and Human Services.

Among these, there were no cases in children, no cases of extra-abdominal disease and only four cases of extrahepatic disease (Box 2).

Assessment for attributable exposure was possible in 29 of the 41 patients who fulfilled the case definition and in 20 of the 25 patients diagnosed from 1996 to 2012. All 29 patients could identify a period during which they had regular close contact with sheep-farming areas or offal-fed dogs and subsequent high-risk exposure for hydatid acquisition. Of these, 26 patients could identify such high-risk exposure before 1965. Of the remaining three patients, two described significant exposure to sheep farms in the late 1960s to early 1970s. The remaining patient could only recall significant exposure from the late 1970s.

Twenty-four patients were managed surgically: eight of the 17 patients diagnosed before 1996 and 16 of the 25 patients diagnosed from 1996 to 2012. Two patients underwent cyst aspiration. Three further patients were referred for, but did not undergo, surgical management because of high operative risk. Nine patients did not receive any therapy, as their disease was considered to be inactive. Management was unknown in four patients.

Twenty-three patients received anthelmintic therapy. All of these patients were treated with albendazole, and an additional five patients received praziquantel during the perioperative period. A higher proportion of patients diagnosed between 1996 and 2012 received albendazole (15 of 24; 62.5%) than of those diagnosed before 1996 (eight of 17; 47%).

Outcome could be determined in 34 patients. Two patients had died from complications of their hydatid disease (one from a ruptured pulmonary cyst and one from postoperative infection) and seven had died from other causes. Twenty-five patients were alive, and 10 of these continued to have their hydatid disease monitored. Two of these remain on lifelong albendazole.

Discussion

Our study demonstrates that hydatid disease persists in Tasmania, albeit at a low rate. Currently, just over one new case is diagnosed per year. It is expected that this rate will remain stable for the next two to three decades, as patients present with symptoms related to their cysts, or as cysts are found incidentally during investigations for unrelated disorders.

Our study has found no evidence of hydatid transmission in Tasmania between 1996, the year that the state was provisionally declared hydatid-free, and 2012. A preponderance of hepatic disease and a paucity of childhood and extrahepatic disease, as found in this study, are observed in regions where local transmission has ceased.6,13

It has previously been estimated that human hydatid transmission had ceased in Tasmania by 1974, possibly by as early as 1970.6,14 This conclusion was based on data that demonstrated the absence of hydatid disease in children born shortly after the program was introduced. The trend of increased median age at diagnosis since 1965 among study subjects (Box 2) and the fact that no subject was born after 1968 support the suggestion that human hydatid acquisition ceased well before 1996. All but one of the 29 patients assessed for time of hydatid acquisition described risk factors before the mid 1970s. The remaining patient moved to a sheep-farming region in the late 1970s, at which time the prevalence across the state in rural dogs was estimated at 0.2%.11

Management of hydatid disease remains primarily surgical, with adjuvant anthelmintic therapy. Surgical rates were slightly higher in patients diagnosed between 1996 and 2012; however, this may be due to incomplete data collection rather than a true change in practice. Puncture, aspiration, injection and reaspiration therapy was employed for only two patients. This may reflect a lack of expertise with this technique within Tasmania, or that patients are presenting with complex disease for which the therapy would not be appropriate.

E. granulosus infection continues to occur widely in mainland Australia, predominantly in cooler regions with high rainfall, and it is estimated that 80–100 patients are diagnosed with hydatid infection per year in Australia.15 A survey of New South Wales and the Australian Capital Territory from 1987 to 1992 found the highest prevalence in the shire population of north-east and south-east NSW, with a mean annual prevalence of up to 23.5 cases per 100 000 population,15 a rate comparable to that of rural Tasmania before eradication. The sylvatic life cycle established in many mainland regions will be a significant obstacle to eradication from mainland Australia.2,15-18

Estimating the true prevalence of human hydatid disease in Australia is hampered by the poor quality of notification data, which has been repeatedly demonstrated to underestimate the true incidence of the disease.15 Our study found a notification rate of just 40%. Reasons for such poor notification rates are likely to be multifactorial. However, a lack of awareness of the requirement to notify cases and the perception that human hydatid disease is no longer a disease of public health importance in Tasmania are likely to be contributing factors.

The threat of recurrence in Tasmania remains. A new focus of infection was identified in 1988 on King Island, north of Tasmania, where hydatids had not been detected since 1971.19 The source of the outbreak was never identified, and typing of the isolates found them to be genetically distinct from Tasmanian and mainland Australian strains. In 1997, a single sheep and cow were found to have hydatid cysts.9 The source was traced back to a dog that had been introduced from mainland Australia. Since then it has been mandatory to dose any dog being brought to Tasmania with praziquantel before arrival. It remains illegal to feed offal to dogs.

Our retrospective study has limitations owing to incomplete patient information and the use of public hospital-based case ascertainment. As a result, we will have failed to capture patients managed in private hospitals or conservatively by general practitioners and surgeons. Nonetheless, the study confirms that human hydatid disease continues to be seen in Tasmania, in patients exposed both before and after the eradication campaign began. Most significantly, it demonstrates the likely success of the Tasmanian Hydatids Eradication Council in eliminating transmission of hydatid disease within Tasmania.

1 Life cycle of Echinococcus granulosus

Source: DPDx Laboratory Identification of Parasites of Public Health Concern. Echinococcosis [internet image library]. Atlanta: Division of Parasitic Diseases and Malaria, Centers for Disease Control and Prevention. http://dpd.cdc.gov/dpdx/HTML/ImageLibrary/Echinococcosis_il.htm (accessed Feb 2013). The adult worm resides in the small intestine of the definitive host (1) and releases eggs into the environment (2), which are then ingested by an intermediate host. In the small intestine of the intermediate host, the oncosphere hatches and penetrates the intestinal wall (3) to enter the bloodstream, through which it migrates before being deposited in other organs (4), where it forms a cyst. The cyst produces protoscoleces (5). The cycle is completed when the intermediate host is eaten by a definitive host, which ingests the protoscoleces; these then develop into the adult worm.

2 Patient demographics and echinococcosis infection site details, by years of diagnosis

Years of diagnosis | No. of patients | Median age at diagnosis (range), years | Male:female ratio | Sites of infection (n) | Region of acquisition (n)
Before eradication program (1943–1965) | 1 | 25 | 1:0 | Liver | Unknown
1966–1975 | 4 | 24 (9–41) | 3:1 | Spine (1); liver (2); liver, lung and abdominal cavity (1) | East coast (1); rural: not specified (1); unknown (2)
1976–1985 | 2 | 33 (32–34) | 0:2 | Spine (1); liver (1) | Derwent Valley (1); rural: not specified (1)
1986–1995 | 9 | 60 (44–75) | 5:4 | Liver (8); liver and lung (1) | Derwent Valley (1); central (1); south (1); overseas (1); rural: not specified (1); unknown (4)
1996–2005 | 14 | 60 (36–87) | 1:1 | Liver (11); liver and pelvic cavity (2); spleen (1) | Derwent Valley (4); central (1); east coast (1); south (3); north west (1); rural: not specified (1); unknown (3)
2006–2012 | 11 | 66 (43–84) | 4:7 | Liver (10); liver, spleen and abdominal cavity (1) | Derwent Valley (1); south (4); east coast (1); north west (1); overseas (1); central (1); unknown (2)

3 Number of new echinococcosis diagnoses per year in Tasmania, 1967–2012

Data for diagnoses from 1967 (2 years after introduction of the eradication program) to 1985 sourced from McConnell11 and Tasmanian Hydatid Disease Newsletter 1971; 3(31) (Tasmanian Hydatids Eradication Council). Data for diagnoses from 1996 onward collected in this study.

Outcomes for Indigenous and non-Indigenous patients who access treatment for hepatitis C in the Top End of the Northern Territory

To the Editor: Chronic hepatitis C virus (HCV) infection affects over 225 000 Australians1 and is a leading cause of the need for liver transplantation and of liver-related death, but curative treatments are available. Ethnicity is a major determinant of treatment responsiveness, with the lowest sustained virological response (SVR) rates reported in African patients, and the highest in Asian patients.2 Much of this difference is accounted for by racial differences in polymorphisms in the interleukin-28B (IL28B) gene;3 however, it is unknown how common these polymorphisms are in Indigenous Australians, and no studies have been published about hepatitis C treatment outcomes among Indigenous Australians.

The hepatitis C treatment service for the Top End of the Northern Territory is run from a community-based sexual health clinic in Darwin. As clinicians working at this service, our perception was that Indigenous people rarely accessed the service or received treatment for HCV infection. Further, we were concerned that — due to social, cultural and linguistic barriers — Indigenous people who accessed the service may be less likely to commence treatment and to successfully complete treatment and achieve an SVR. Following ethics approval from the Human Research Ethics Committee of the Northern Territory Department of Health and Families, we performed a retrospective case-note audit to determine the number of Indigenous people accessing the hepatitis C treatment service and their characteristics and treatment outcomes.

During the period 1 January 2006 to 31 December 2010, 243 patients were seen on at least two occasions for assessment of HCV infection; all were adults and 22 (9%) were Indigenous. During the audit period, HCV infection was treated with pegylated interferon-α plus ribavirin for 24–48 weeks. There were no significant differences in the proportion of patients who went on to commence and complete treatment, and to achieve an SVR, between Indigenous and non-Indigenous patients (Box). Of five Indigenous patients tested for IL28B genotype, all had the favourable CC polymorphism at the rs12979860 locus. Compared with the unfavourable TT and CT polymorphisms, the CC polymorphism at this locus is associated with at least a twofold higher chance of achieving a cure of HCV with interferon treatment, due to enhanced host immune responsiveness to interferon.3

In conclusion, Indigenous people in the NT who access hepatitis C treatment services have a similar chance of achieving a cure (SVR) to non-Indigenous people. This may be partly because they are likely to carry the favourable CC polymorphism at the IL28B gene.

Indigenous compared with non-Indigenous patients who attended a hepatitis C treatment service on at least two occasions for assessment of HCV infection*

Characteristic | Indigenous (n = 22) | Non-Indigenous (n = 221) | P
Age, median (interquartile range) | 41.0 (36.2–45.0) | 47.3 (39.3–52.1) | 0.14
Men | 15/22 (68% [45%–86%]) | 144/221 (65% [58%–71%]) | 0.78
HCV genotype 1 | 10/18 (56% [31%–78%]) | 87/173 (50% [43%–58%]) | 0.67
Commenced HCV treatment | 11/22 (50% [28%–72%]) | 99/221 (45% [38%–52%]) | 0.58
Completed HCV treatment | 9/11 (82% [48%–98%]) | 80/99 (81% [72%–88%]) | 0.94
Achieved sustained virological response | 4/8 (50% [16%–84%]) | 54/88 (61% [50%–72%]) | 0.53

HCV = hepatitis C virus. * Data are number/denominator (% [95% CI]) unless otherwise indicated. Denominators represent those patients for whom 6-month post-treatment blood test results were available.
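The statistical methods are not stated in the letter, but the intervals in the Box are consistent with exact (Clopper–Pearson) binomial confidence intervals for each proportion, with a standard test for two proportions used for the between-group comparisons. The Python sketch below is illustrative only: the function names are ours, and the choice of Fisher's exact test is an assumption, so the P value it produces need not match the one reported.

# Illustrative sketch only (not the authors' analysis): exact binomial CIs and an
# assumed Fisher's exact test for one row of the Box (sustained virological response).
from scipy.stats import beta, fisher_exact

def exact_ci(x, n, alpha=0.05):
    # Clopper-Pearson exact confidence interval for a binomial proportion x/n
    lower = beta.ppf(alpha / 2, x, n - x + 1) if x > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, x + 1, n - x) if x < n else 1.0
    return lower, upper

print(exact_ci(4, 8))     # roughly (0.16, 0.84), matching the reported 16%-84%
print(exact_ci(54, 88))   # roughly (0.50, 0.72), matching the reported 50%-72%

# Two-by-two comparison (rows: group; columns: SVR achieved / not achieved)
odds_ratio, p_value = fisher_exact([[4, 4], [54, 34]])
print(round(p_value, 2))  # assumed test; may differ from the reported P = 0.53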

Locally acquired severe non-O1 and non-O139 Vibrio cholerae infection associated with ingestion of imported seafood

To the Editor: We report a case of severe Vibrio cholerae infection acquired in Sydney, likely due to ingestion of imported seafood.

An 83-year-old man with Parkinson disease presented with a 3-day history of vomiting, large-volume watery diarrhoea and acute renal impairment necessitating admission to the intensive care unit. Blood cultures grew curved gram-negative bacilli, and intravenous piperacillin–clavulanic acid was commenced. Subsequent microbiological testing of blood isolated V. cholerae, prompting testing and confirmation of V. cholerae in stool cultures. The strain was identified as non-O1 and non-O139 by serotyping, and toxin gene-negative by polymerase chain reaction testing.

The patient subsequently reported ingestion of imported seafood (a marinara mix containing mussels from Chile, prawns from Vietnam and squid from China), purchased from a local supermarket, although none of the suspected food was available for testing. He had no recent travel history or exposure to marine or brackish-water environments and no unwell contacts.

Antimicrobial therapy was changed to ciprofloxacin. The patient’s recovery was complicated by caecal pseudo-obstruction requiring endoscopic decompression. He was discharged after 2 weeks of antibiotic treatment.

Although rare, sporadic cases of both epidemic (O1 and O139 serotypes) and non-epidemic (non-O1 and non-O139 serotypes) V. cholerae infection have been reported in Australia.1 Australian cases have been linked to ingestion of imported seafood, with a notable outbreak in Sydney associated with imported whitebait.2 V. cholerae is known to be present in Australian estuaries, and some endemic cases have been associated with local aquatic exposure.3

While only O1 and O139 isolates are mandated for reporting to Australian public health units, non-epidemic strains of V. cholerae are associated with bacteraemia and a poor prognosis.4 Clinically suspected cases of V. cholerae infection should be reported to public health units, pending microbiological confirmation. As identification of Vibrio species is not routinely done on stool cultures, a suspicion of Vibrio infection must be communicated to laboratories.

Under current Australian law, only imported cooked prawns are required to be tested for Vibrio species contamination, and there is no restriction on the geographical origin of seafood imported into Australia.5,6 This case highlights an ongoing risk of potentially severe V. cholerae infection from imported seafood, and it should be considered as a differential diagnosis for patients presenting with severe enteritis and a compatible exposure history.