
Suicide prevention: signposts for a new approach

Suicide prevention can be improved by implementing effective interventions, optimising public health strategies and prioritising innovation

Suicide has overtaken motor vehicle accidents as the leading cause of death among young adults aged 15–44 years in Australia. In 2011, 410 Australians aged 25–34 years took their own lives, with a total of 2273 deaths from suicide reported across all age groups.1 In terms of funding allocations, the Australian Government’s investment in the National Suicide Prevention Program (NSPP) more than doubled from $8.6 million in the financial year 2005–06 to $23.8 million in the financial year 2010–11.2 However, it is uncertain whether specific activities funded under this and similar schemes have reduced suicide rates. One study reported that Australia’s efforts to improve youth suicide prevention through locally targeted suicide prevention activities under the National Youth Suicide Prevention Strategy were unsuccessful in the period 1995–2002.3 Recent studies highlighting the limitations of individual risk assessments have contributed to a sense of nihilism. In suicide prevention, there is an acute mismatch between evidence-based interventions and clinical and population-based practice. The evidence of effectiveness is very limited,4 while the need to act is compelling.

Given this picture, a new approach must be considered — one that optimises implementation of the few public health interventions backed by strong research evidence while also testing innovative strategies. The following six recommendations may help focus a new suicide prevention policy.

Recommendation 1: implement known effective interventions

A first step in reducing suicide rates is to implement interventions that are known to work. The three public health interventions with the strongest evidence base in reducing suicide are gatekeeper training, reduction in access to means, and good-quality effective mental health care.4 Gatekeeper training involves teaching individuals such as health care professionals, army and air force officers, school staff and youth workers, who are primary points of contact for high-risk populations, to effectively identify, assess and manage risk of suicidality and provide referral to treatment if necessary. Reduction in access to means of suicide includes increased restriction of access to firearms, domestic gas and pesticides, reduced pack size of analgesics and physical barriers at suicide sites. Good-quality mental health care, such as training for general practitioners to identify depression, combined with collaborative care, quality assurance programs and nurse management, is effective in reducing depression. A descriptive, cross-sectional before-and-after analysis of national United Kingdom suicide data from 1997 to 2006 provided evidence supporting the utility of combining various prevention strategies within mental health services.5 In 2005, for example, services that implemented seven to nine out of a total of nine recommendations had suicide rates of 10.50 per 10 000 patients, while services that implemented zero to six recommendations had rates of 13.45 per 10 000 patients.

However, even as a first step in reducing suicide in Australia, the value of these strategies is limited. Most evaluations of community gatekeeper programs report improved knowledge about suicide and increased self-efficacy in gatekeepers (ie, gatekeeper trainees’ self-reported perceptions and appraisal of their own ability, competence and skills to successfully identify and assess suicidal risk and refer to appropriate services if necessary). However, gatekeeper training has established effectiveness for suicidal ideation or suicide attempts only in certain medical or institutional contexts, such as primary care or the military.4 Community gatekeeper training, which is currently funded under the NSPP, has not been subject to rigorous empirical tests for core suicide outcomes. Means restriction is influential where access to suicide methods (such as pesticides) is prevalent. However, in Australia, efforts to restrict means are already in place. Improved mental health care will only be effective for those in contact with mental health services, estimated to be less than half of those who attempt suicide,6 and for only one-third (34.9%) of those with any mental disorder over a 12-month period.7 Health reform and investment to increase early access to headspace (the National Youth Mental Health Foundation) and to e-health services for young people may increase rates of help-seeking. However, effectiveness research is very limited — of the “effective” public health interventions for suicide described above, only gatekeeper training has been subjected to a single randomised controlled trial (RCT).4,8 In contrast, a 2012 paper reported more than 30 RCTs of non-pharmacological interventions to prevent depression,9 and a 2005 retrospective evaluation reviewed 477 RCTs of selective serotonin reuptake inhibitors to explore whether antidepressants increased risk of suicide.10

Recommendation 2: model for best “bang for buck”

To gain maximum benefit from the available suicide prevention funds, we need to determine the impact, circumstances and audience of targeted or universal population-based approaches. Targeted approaches aim to lower risk in groups with higher risk of suicide, such as Indigenous youth, lesbian, gay, bisexual and transgender people, and older men,1 or those with higher risk of suicide attempts, such as young women. A broader population-based approach aims to lower the overall level of risk factors and behaviours in the population, thereby reducing the number in the “high risk” tail of the distribution.11 Modelling is required to determine the extent to which targeted or population approaches will deliver the best “bang for buck” in reducing suicide attempts, risk and burden on the health system, and in facilitating the uptake of specific prevention activities and health services. The economic costs of suicide need to be assessed more broadly.

Recommendation 3: evaluate whether simpler interventions are as effective as more complex ones

Internationally, the trend is to combine multiple elements into broader programs, such as the European Alliance Against Depression (EAAD), which involves 20 international partners representing 18 European countries, Optimizing Suicide Prevention Programs and their Implementation in Europe (OSPI Europe)12 and the “Don’t hide it. Talk about it” campaign, undertaken in conjunction with the Choose Life training program in Scotland. For most of these programs, we are unable to determine whether a single element, a combination of elements or the sheer intensity of the cumulative effect of the approach is the key to any potential impact. The downside of complex interventions is that costs rise and translation to practice requires intense effort, so there is urgency about evaluating each of the elements separately.

Recommendation 4: take advantage of opportunities early in the suicide prevention chain

Risk models indicate that suicide risk arises from depression, hopelessness and capability which, in combination with proximal and immediate triggers, lead to suicidal acts. Systematic intervention early in this “chain” is important. If depression is a necessary (albeit not sufficient) condition and prominent risk factor for suicide, intervening early for depression is critical. From a population perspective, schools are an ideal environment in which to deliver interventions that may lower the risk of suicide later. Australian researchers have shown that “upstream” modification of depression and alcohol misuse is achievable.13 However, these upstream interventions are not systematically implemented. Postvention programs have been newly introduced into high schools to deal with the fallout of an attempted suicide or a suicide, without support from RCTs. These may be useful, although this remains to be seen. The point is that our strategy needs to put more emphasis on prevention in those with risk, where evidence is relatively strong. Put simply, the hospital emergency department should not be the first point of intervention in the suicide prevention chain.

Recommendation 5: offer suicide programs directly through the internet to those at risk and not in contact with mental health services

There is promising evidence that online programs are effective and able to reach many who do not seek traditional health services.14

Recommendation 6: develop clear prevention messages and practices to improve suicide literacy

Media guidelines promote responsible professional coverage and caution against the possibility of social contagion. This social transmission of suicidal behaviour through social media needs immediate attention, particularly in young people, given the potential for harm. However, there is recognition that the issue of suicide must be discussed to improve understanding and, hopefully, to lower risk. The National Mental Health Commission recently commissioned research to explore Australians’ attitudes to suicide. The report concluded that since “simple advice can help stem the tide of some diseases”, a public campaign around suicide was warranted.15 Recent Australian research uncovered similar findings. Literacy levels around suicide are low, and people do not know what constitutes the triggers to suicide, or how to identify suicide risk in their friends and family.16

There may be overall harm in shutting down talk about suicide if this strategy inhibits a more integrated community and medical response to identifying those at risk. The information needs of the community need to be mapped out, and tailored messages should be trialled through community and expert consultation. We reiterate that before a campaign around suicide literacy and stigma is launched, the proposed campaign messages and dissemination practices should be tested using controlled experiments to determine if they raise appropriate awareness.

A new suicide prevention strategy

Suicide is a complex behaviour, and likely to have different causes and triggers depending on context and individual characteristics (eg, Indigenous and remote communities, culturally and linguistically diverse groups, people in prisons and those with a psychiatric disorder). However, suicide rates will not fall substantially if we continue a scattergun approach to funding diverse projects, fail to prioritise interventions with proven effectiveness, ignore the opportunity to optimise a broader population health approach, or fail to fund innovation using new technologies. We must invest in new strategies with demonstrated impact to avoid further loss of life.

The extent of alcohol advertising in Australia: an audit of bus stop advertisements

To the Editor: There is significant concern about drinking patterns and alcohol-related harm among young people. A comprehensive approach to preventing harm from alcohol is needed, with population approaches including curbs on alcohol promotion.1 The National Preventative Health Taskforce has recommended phasing out alcohol promotions “from times and placements which have high exposure to young people”.1

Exposure to alcohol advertising influences young people; research consistently shows strong associations between exposure to alcohol advertising and young people’s early initiation to alcohol, and increased consumption if they already use alcohol.2 Concerns about the ability of Australia’s system of self-regulation to prevent young people’s exposure to alcohol promotions are well documented.3,4

The entire community is exposed to outdoor advertising; it dominates public spaces and is visible throughout the day. Advertisements on public transport and at transit stops are a common form of outdoor alcohol promotion. Young people are more likely than older people to use public transport5 and are therefore likely to be exposed to promotions placed at bus stops.

To provide a snapshot of the volume of alcohol advertising on bus stop hoardings in an Australian capital city, we audited bus stop advertisements in Perth, Western Australia. The auditors (one of us [H L P] and a research assistant) followed a predetermined route within a 15 km radius of the Perth central business district on 6 December 2012 and 5 February 2013 (a total of 144 km). The product type on each advertisement was recorded.

Over the two audit sessions, 172 of 744 bus stop advertisements identified were for alcohol products (23.1%). In each audit session, alcohol was the dominant product category. In the alcohol category, there were 74 advertisements (43.0%) for beer, 70 advertisements (40.7%) for wine products, 27 advertisements (15.7%) for spirits and ready-to-drink products and one advertisement (0.6%) for cider.

These results provide further evidence that self-regulation is failing to prevent exposure of children and young people to alcohol advertising.3,4 Legislated curbs on alcohol advertising which effectively prevent such exposure are urgently required as part of a comprehensive approach to preventing alcohol-related harm.

Better prepared next time: considering nutrition in an emergency response

To the Editor: Cyclones, floods and bushfires are experienced in Australia every year, and Australia’s management of natural disasters centres on prevention, preparedness, response and recovery.1 Although access to safe food is a basic human need, during the 2010–2011 Queensland floods there was minimal information available to guide household food preparedness and food supply to communities.2 To ensure that Queensland is better prepared for future natural disasters, the Queensland Floods Commission of Inquiry recommended the development of consistent community education programs.2 Following the floods, a local food security resource kit3 was developed; however, there were no statewide resources. In 2011, we were members of a multidisciplinary working group — the Food Requirements in Disasters Working Group — that was established by Queensland Health to provide advice on food requirements in disasters for households and community organisations.

There is little international literature on food recommendations in disasters that is specific to high-income countries. Existing Australian resources did not consider nutritional requirements for infants, children and adults, did not provide sufficient advice for appropriate food purchasing in the event of no access to power or water and/or were no longer publicly available.4,5 Twenty-six principles and nutritional criteria (Box) — covering food safety, practical considerations and nutrient requirements — guided the development of recommendations on food requirements during disasters for infants, children and adults.

Five online fact sheets (available at http://www.health.qld.gov.au/disaster/html/prepare-event.asp) outlining the food and equipment required to sustain two people for 7 days (Emergency pantry list for Queensland households) and to support both breastfed and formula-fed infants for 3 days (including Food for infants in emergencies and Preparing ready-to-use infant formula in an emergency) were developed. The recommended types and quantities of foods align with the Australian dietary guidelines and Infant feeding guidelines (available at http://www.eatforhealth.gov.au). To facilitate purchasing choices, tips and examples of product sizes based on items available in major supermarkets are included.

Credible, easily accessible information is essential to ensure households have the capacity to prepare for and respond to disaster situations, to prevent panic buying and food shortages, and to minimise any negative impact on the health and wellbeing of individuals affected by disaster. Queenslanders now have access to a suite of resources to help them stay safe and healthy during natural disasters and severe weather conditions.

Principles and nutritional criteria used to guide recommendations on food requirements during disasters for infants, children and adults

Principles

  • Nutrient requirements need to be balanced against practicality

  • Provision of adequate energy (kilojoules) and water are key priorities

  • Dietary recommendations set at population level — no individual dietary requirements

  • Requirements per person — should be scalable

  • Food products should be non-perishable

  • No refrigeration required

  • Minimal preparation required

  • No reheating or cooking involved

  • Number of days — should be scalable and informed by practical experience

  • Include generic products rather than specific brands

  • Total weight should be kept to a minimum

  • Foods should be safe

  • Packaging should be robust

  • Packaging should be waterproof and non-porous

  • Packaging should be vermin proof

  • Presume there are no facilities available for food storage — provide appropriate containers and serving sizes

  • Provide other equipment needed for preparation and consumption of food, including hand sanitiser, plastic cutlery and plates

  • Wastage should be minimised

  • Costs should be reasonable (no luxury items)

  • Foods should be palatable and acceptable

  • Foods should be readily available, familiar and culturally appropriate

  • Foods should be adaptable to personal tastes

Nutritional criteria

  • Provide mean food and nutrient requirements for adults and children

  • Provide mean food and nutrient requirements for infants (≤ 12 months)

  • Provide 100% of requirements (presume that households and isolated people have no other food available)

  • Particularly note upper limit for sodium

Improved iodine status in Tasmanian schoolchildren after fortification of bread: a recipe for national success

Iodine is an essential micronutrient required for thyroid hormone synthesis. Inadequate dietary iodine intake is associated with a spectrum of diseases termed iodine deficiency disorders. The most serious and overt consequences are neurocognitive disorders and endemic goitre.1 Urinary iodine excretion is a marker of recent dietary iodine intake and is typically used to monitor population iodine sufficiency. Population iodine status is considered optimal when median urinary iodine concentration (UIC) is between 100 µg/L and 199 µg/L, with no more than 20% of samples having UIC under 50 µg/L.1

Concern about the emergence of widespread mild iodine deficiency in Australia and New Zealand led to mandatory iodine fortification of yeast-leavened bread in 2009.2 Tasmania has a well documented history of endemic iodine deficiency, with iodine supplementation strategies implemented since the 1950s.3 The use of iodophors as sanitising agents in the dairy industry was thought to have provided protection; however, urinary iodine surveys of Tasmanian schoolchildren in 1998 and 2000 showed a recurrence of iodine deficiency.4

In October 2001, the Tasmanian Government introduced a state-based voluntary iodine fortification program as an interim measure to reduce the recurrence of iodine deficiency. This program resulted in a modest but significant improvement in population iodine status.5 The Tasmanian voluntary fortification experience provided valuable information for the development of the Australia and New Zealand mandatory iodine fortification program.

In this article, we describe the results of the 2011 urinary iodine survey of Tasmanian schoolchildren and compare these results to surveys conducted before fortification and during a period of voluntary fortification.

Methods

A cross-sectional urinary iodine survey of Tasmanian schoolchildren was conducted in 2011. Survey methods were comparable to those used during the period of voluntary fortification, as described elsewhere.5

A one-stage cluster sampling method was used to randomly select school classes that included fourth-grade students from all government, Catholic and independent schools in Tasmania (such classes may include children in third, fourth, fifth and sixth grade, as composite class structures are popular in Tasmania). A total of 52 classes (from 49 schools) were invited to participate. This included 42 classes that had been randomly selected for the final survey conducted during the period of voluntary fortification and an additional 10 classes randomly selected in 2011 to boost sample size. In total, 37 classes (from 35 schools) agreed to take part, representing a class participation rate of 71%. Of the 880 children in participating classes, 356 (40%) returned positive consent and 320 (36%) provided a urine sample for analysis. These participation rates are comparable with the rates reported from previous surveys.5

Spot urine samples were collected at home, returned to school and transported by a private pathology provider to a laboratory where they were frozen and stored. Batch analyses were completed by the Institute of Clinical Pathology and Medical Research, Westmead Hospital. UIC was measured using the ammonium persulfate digestion method based on the Sandell–Kolthoff reaction.6

UIC data from children of comparable age from prefortification surveys and from participants in the surveys from the voluntary fortification period were used for comparison with the data from this survey.

Data were analysed using Stata version 11 (StataCorp). Median UIC, interquartile range and the proportion of samples with UIC under 50 µg/L were calculated for each survey. To facilitate comparisons between medians and the proportion of UIC results under 50 µg/L across intervention periods (prefortification, voluntary fortification and mandatory fortification), data were combined from the two prefortification surveys (1998 and 2000) and from the four surveys conducted during the period of voluntary fortification (2003, 2004, 2005 and 2007). Differences in median UIC across intervention periods were compared using Kruskal–Wallis χ2 (corrected for ties) with post-hoc Wilcoxon rank-sum test.
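
The original analysis was run in Stata 11; purely as an illustration of the comparison described above, the sketch below uses Python with SciPy and simulated UIC values (not the survey data) to run a Kruskal–Wallis test across the three intervention periods followed by post-hoc Wilcoxon rank-sum tests.

```python
# Illustrative only: simulated UIC values stand in for the survey data.
# The published analysis was performed in Stata 11.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Simulated spot urinary iodine concentrations (µg/L) for each intervention period,
# with sample sizes matching the combined surveys
prefortification = rng.lognormal(mean=np.log(73), sigma=0.4, size=215)
voluntary = rng.lognormal(mean=np.log(108), sigma=0.4, size=1482)
mandatory = rng.lognormal(mean=np.log(129), sigma=0.4, size=320)

# Kruskal-Wallis test across the three periods (ties are corrected for automatically)
h_stat, p_overall = stats.kruskal(prefortification, voluntary, mandatory)
print(f"Kruskal-Wallis H = {h_stat:.1f}, P = {p_overall:.3g}")

# Post-hoc pairwise Wilcoxon rank-sum tests
pairs = [("prefortification v voluntary", prefortification, voluntary),
         ("voluntary v mandatory", voluntary, mandatory)]
for label, a, b in pairs:
    z, p = stats.ranksums(a, b)
    print(f"{label}: median {np.median(a):.0f} v {np.median(b):.0f} µg/L, P = {p:.3g}")

# Proportion of samples with UIC < 50 µg/L in each period
for label, x in [("prefortification", prefortification),
                 ("voluntary", voluntary),
                 ("mandatory", mandatory)]:
    print(f"{label}: {np.mean(x < 50):.1%} of samples < 50 µg/L")
```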

Ethics approval was obtained from the Tasmanian Health and Medical Human Research Ethics Committee and the Department of Education Tasmania. Parent or carer consent was obtained for all participating children.

Results

Of the 320 students participating in the 2011 survey, 158 (49%) were boys, 153 (48%) were girls and nine (3%) were of unknown sex. Participants were aged 8–13 years, with 83% aged 9–10 years. The median UIC in 2011 was 129 µg/L, and 3.4% of samples had a UIC under 50 µg/L.

The median UIC in 2011 was significantly higher than during the period of voluntary fortification (129 µg/L v 108 µg/L; P < 0.001), which in turn was significantly higher than the median UIC from the prefortification period (73 µg/L; P < 0.001) (Box 1). There was a reduction in the proportion of UIC results under 50 µg/L after voluntary fortification compared with prefortification, from 17.7% to 9.6% (P < 0.001), and a further reduction to 3.4% after mandatory fortification (P = 0.001) (Box 2). Box 3 shows the progressive improvement in median UIC results from Tasmanian urinary iodine surveys of schoolchildren over the iodine fortification intervention periods (prefortification, voluntary fortification and mandatory fortification).

Discussion

Our findings show a progressive improvement in the iodine status of Tasmanian schoolchildren over the iodine fortification intervention periods (from prefortification to voluntary fortification and mandatory fortification). This study also shows the specific benefit of a mandatory versus a voluntary approach to iodine supplementation.

Population iodine status is routinely assessed by measuring UIC, whereas determining the appropriate level of fortification in food relies on estimates of dietary intakes. The relationship between dietary iodine intake and UIC is usually linear — an increase in dietary intake results in a comparable increase in urinary excretion.7 The 56 µg/L increase in median UIC from prefortification to mandatory fortification is consistent with the predicted 52 µg/d increase in the mean dietary iodine intake for children aged 9–13 years, estimated by dietary modelling before the introduction of mandatory iodine fortification.8

This is the first study to specifically evaluate the adequacy of iodine nutrition in an Australian population after the introduction of mandatory iodine fortification of bread in 2009. The results are of significance to the Australian population more broadly, as the magnitude of effect of mandatory supplementation on the national population is likely to be similar to that observed in Tasmania.

In the 2004 National Iodine Nutrition Study, a survey of schoolchildren found that Western Australia had the highest median UIC of all Australian jurisdictions, at 142.5 µg/L.9 Applying the 56 µg/L increase observed across our surveys to the WA median would give a UIC of just under 200 µg/L (142.5 µg/L + 56 µg/L ≈ 198.5 µg/L), which is at the upper level of the optimal range.1

To facilitate comparisons, the sampling method used in our 2011 survey was modelled on the method used in the surveys conducted during the period of voluntary fortification.5 Classes that included fourth-grade children were originally chosen as the sampling frame to be consistent with World Health Organization guidelines for assessing population iodine status.1 Staff from the Department of Education Tasmania advised that this age group would be sufficiently independent to provide a urine sample, while minimising self-consciousness likely in older children. It is yet to be seen whether the observed impact of mandatory fortification is representative of other population groups, such as adults. Published surveys of prefortification UIC of Melbourne adults offer a useful baseline for this purpose.10 The Australian Health Survey 2011–2013 is measuring UIC in adults and children across Australia, and we anticipate this will provide further evidence of the iodine status in the Australian population.

Comparisons with prefortification surveys should be interpreted with the knowledge that there were subtle differences in sampling methods. A two-stage stratified sampling procedure was adopted in the prefortification period (1998–2000), where schools and then students from within schools were randomly selected. Subsequent surveys used a one-stage cluster sampling method with classes that included fourth-grade students as the sampling frame. These sampling differences are not considered significant and have been discussed elsewhere.5 Any sample bias associated with factors such as socioeconomic status or geographic location is unlikely to affect the results, as an association between UIC and these factors has not been found previously.4

Although the 2011 results are consistent with iodine repletion in the general population, they cannot be generalised to high-risk subgroups such as pregnant and breastfeeding women, whose daily iodine requirements increase by about 40%.11 Prior research in Tasmania has shown persistent iodine deficiency in pregnancy despite the introduction of voluntary iodine fortification.12 Recent evidence suggests that while mandatory iodine fortification may have benefited breastfeeding women, only those consuming iodine-containing supplements had a median UIC in the adequate range.13 Future studies of iodine nutrition should specifically assess the adequacy in these groups. Similarly, ongoing awareness of the recommendation that pregnant and lactating women take 150 µg of supplemental iodine per day should not be overlooked, particularly in those parts of Australia where marginal iodine deficiency has been previously reported.14,15

Changes to the iodine content of food supply (such as the level of iodine in milk or the level of salt in bread) or shifts in dietary choice (such as a preference for staples other than bread) could jeopardise iodine status in the future.3,16 The value of ongoing vigilance in monitoring population iodine status has been highlighted by previous authors.12,17,18 In addition, monitoring iodine levels in the food supply will be required to inform future adjustments to the mandatory iodine fortification program.

1 Urinary iodine concentration (UIC) of Tasmanian schoolchildren by year and intervention period

Intervention period | Year (n) | Median UIC (95% CI) | IQR | Proportion of samples with UIC < 50 µg/L (95% CI)

Prefortification* | 1998 (124) | 75 µg/L (72–80 µg/L) | 60–96 µg/L | 16.9% (10.3%–23.6%)
Prefortification* | 2000 (91) | 72 µg/L (67–84 µg/L) | 54–103 µg/L | 18.7% (10.6%–26.7%)
Voluntary fortification* | 2003 (347) | 105 µg/L (98–111 µg/L) | 72–147 µg/L | 10.1% (6.9%–13.3%)
Voluntary fortification* | 2004 (430) | 109 µg/L (103–115 µg/L) | 74–159 µg/L | 10.0% (7.2%–12.8%)
Voluntary fortification* | 2005 (401) | 105 µg/L (98–118 µg/L) | 72–155 µg/L | 10.5% (7.5%–13.5%)
Voluntary fortification* | 2007 (304) | 111 µg/L (99–125 µg/L) | 75–167 µg/L | 7.2% (4.3%–10.1%)
Mandatory fortification | 2011 (320) | 129 µg/L (118–139 µg/L) | 95–179 µg/L | 3.4% (1.4%–5.4%)

IQR = interquartile range. * Based on 1998–2005 surveys.5

2 Comparison of urinary iodine concentration (UIC) of Tasmanian schoolchildren across intervention periods

Fortification intervention period (n) | Median UIC (95% CI) | Difference from prefortification period | P* v prefortification period | P* v voluntary fortification period | Proportion of samples with UIC < 50 µg/L (95% CI) | Odds ratio (P) v prefortification period | Odds ratio (P) v voluntary fortification period

Prefortification (215) | 73 µg/L (70–79 µg/L) | – | – | – | 17.7% (12.6%–23.8%) | 1 | –
Voluntary fortification (1482) | 108 µg/L (102–111 µg/L) | + 35 µg/L | < 0.001 | – | 9.6% (8.1%–11.1%) | 0.49 (< 0.001) | 1
Mandatory fortification (320) | 129 µg/L (118–139 µg/L) | + 56 µg/L | < 0.001 | < 0.001 | 3.4% (1.4%–5.4%) | 0.17 (< 0.001) | 0.34 (0.001)

* Difference in medians compared using Kruskal–Wallis χ2 (corrected for ties) with post-hoc Wilcoxon rank-sum test. Difference in proportion of samples with UIC < 50 µg/L estimated by logistic regression.

3 Median urinary iodine concentration (UIC) of Tasmanian schoolchildren from 1998 to 2011

Evidence-based policies for the control of influenza

Influenza vaccines can prevent serious outcomes of infection, but vaccine policies should be based on the best contemporary evidence

In this issue of the Journal, two studies draw attention to potential difficulties in protecting vulnerable people from influenza infection. In the first study, Wiley and colleagues report a 27% uptake of influenza vaccine by pregnant women in three hospitals in New South Wales in 2011, with differences in uptake attributable to how the vaccine was promoted and the ease of accessing it.1 Influenza vaccination of pregnant women is an important issue that was highlighted during the 2009 pandemic. In Australia, the risk of hospitalisation with pandemic (H1N1) 2009 influenza for pregnant women compared with non-pregnant women aged 15–44 years was increased by about fivefold2 and the risk of admission to intensive care, by about sevenfold.3 The World Health Organization recently recommended influenza vaccination for pregnant women as the highest priority for countries considering initiation or expansion of programs for seasonal influenza vaccines.4

In the second study, Macesic and colleagues estimated that 4% of almost 600 cases of laboratory-proven influenza in sentinel Australian hospitals in 2010 and 2011 were acquired in hospital.5 Although the estimated risk was low, the outcome could be severe. One patient with end-stage respiratory disease died, and 23% of patients required intensive care. Hospitals should be safe places, and acquiring influenza as an inpatient is potentially preventable. Prevention involves five arms: cohorting or isolation of patients with suspected infection; studious attention to respiratory precautions and hand hygiene; preventing staff and visitors with respiratory symptoms from entering the facility; vaccination of everyone with patient contact, including health care workers, visitors and family members; and vaccination of patients.

Influenza vaccination is recommended in the Australian immunisation handbook for patients at increased risk of an adverse outcome from influenza infection, and is funded for these patients.6 All the patients with hospital-acquired influenza in Macesic et al’s study had comorbidities that rendered them eligible for free influenza vaccine, but only 36% had been vaccinated.5 To protect themselves and their patients, the Australian immunisation handbook also recommends that health care workers be vaccinated.6

Vaccination can help prevent influenza infection in pregnant women and hospital inpatients, but it is not a perfect intervention. For many years it has been suggested that trivalent inactivated influenza vaccine provided protection to 70%–90% of participants in randomised controlled trials (RCTs).4,6 However, a more recent estimate from a meta-analysis of vaccines licensed in the United States suggested that protection for adults under the age of 65 years, even in the controlled environment of the RCT, was around 59%.7 A large RCT conducted in Australia and New Zealand during the 2008 and 2009 influenza seasons estimated efficacy as 42% (95% CI, 30%–52%) against all strains of influenza, including the pandemic (H1N1) 2009 influenza virus, while the point estimate for matched strains was 60%. The higher efficacy against vaccine strains matched to circulating strains is expected.8 Participants in RCTs are generally young and healthy, whereas influenza vaccines are funded in Australia for people who are older or have underlying medical conditions, for whom the vaccine may be less effective.

How then do trial results compare with estimates from the field? Recent observational studies from Australia of influenza vaccine effectiveness in routine practice are broadly supportive of estimates from the trials.7,8 Over the period from 2007 to 2011, but excluding the pandemic year of 2009, influenza vaccine effectiveness among adults aged 20–64 years presenting to sentinel general practices in Victoria was estimated as 62% (95% CI, 43%–75%).9 In a study of sentinel Australian hospitals in 2010, vaccine effectiveness against hospitalisation with confirmed pandemic (H1N1) 2009 influenza, the dominant circulating virus that year, was estimated as 49% (95% CI, 13%–70%).10

Trial results can legitimately be compared with Australian observational studies because most of the vaccines used in the trials were trivalent inactivated vaccines, the only type of vaccine currently licensed in Australia, and the end points in all studies were laboratory-confirmed, medically attended influenza. There are no specific vaccine effectiveness estimates for pregnant women or health care workers in Australia, but it is not unreasonable to expect effectiveness for these two groups to be in the range for other adults.

It is important to continue to promote influenza vaccination as a cornerstone of protection against infection and adverse outcomes, but it is also important not to overstate the effectiveness of current inactivated vaccines. Estimates that are not based on contemporary evidence have the potential to undermine confidence in the vaccine. Although not ideal, a vaccine that may protect around half of all recipients from an infection requiring medical attention (a general practitioner visit or hospital admission) can definitely be recommended. Vaccination remains the single best option for controlling influenza, but improved vaccines will make policy setting and promotion of vaccination much easier.11,12

Challenges in regulating influenza vaccines for children

Lessons need to be drawn from the assessment and licensing of influenza vaccines in previous years

In April 2010, Australia suspended paediatric influenza vaccinations as a result of febrile convulsions associated with seasonal trivalent influenza vaccine (TIV). Epidemiological investigations have established that the increase in febrile reactions was limited to one of three brands of TIV used in Australia that year — CSL Biotherapies Fluvax or Fluvax Junior (CSL TIV), registered as Afluria in the United States and Enzira in the United Kingdom.13 Health authorities in Australia estimated that the risk of febrile convulsions in children aged 6 months to 4 years after vaccination with CSL 2010 TIV ranged from 3 to 10 per 1000 children vaccinated.1,3 This figure is remarkable because TIV has an excellent safety record in children and before 2010 was only rarely associated with febrile convulsions. The largest published population-based study found only one febrile convulsion after TIV vaccination of 45 356 children aged 6–23 months,4 giving a risk estimate of 2.2 convulsions per 100 000. Nonetheless, age-related differences in the reactogenicity of influenza vaccines and the potential for influenza vaccines to cause febrile reactions in children had been recognised for decades.5 Reviewing the regulatory history of the CSL influenza vaccine for children (Box) suggests there may be opportunities for improving the licensure of paediatric influenza vaccines.

Licensure of CSL Fluvax for children

In 2002, Australia’s Therapeutic Goods Administration (TGA) registered the thiomersal-free CSL TIV Fluvax for use in persons aged 6 months and older.6 Between 2004 and 2005, CSL TIV was approved for paediatric use in Sweden, the UK and Denmark despite a European Public Assessment Report which indicated that, at the time of initial registration in Europe, “no controlled clinical studies had been conducted in infants, young children, or young adolescents”.7 The assessment also acknowledged that the extent of CSL TIV use among paediatric populations at the time was “not well understood”.7

The first paediatric study of CSL TIV began in March 2005 as a post-licensure commitment to the Swedish Medical Products Agency.7,8 Conducted in Australia, the study involved vaccinating 298 children less than 9 years of age with two doses of the 2005 formulation of TIV.9 In the following year, 273 of the children received a “booster” dose using the 2006 TIV formulation, which had different influenza A(H3N2) and B vaccine virus antigens.9,10 The study results, published in 2009, showed a marked difference between the risk of reported fever, depending on the annual formulation of the CSL TIV administered.9 Among children less than 3 years of age, the proportion experiencing fever was 22.5% after vaccination with the CSL 2005 TIV formulation and 39.5% after vaccination with the 2006 formulation.9 For children aged 3–8 years, the proportions experiencing fever were also elevated in 2006, rising from 15.6% and 8.2% for doses 1 and 2 in 2005, respectively, to 27.0% after vaccination with the 2006 TIV formulation.9 Reanalysis of the published data shows that the increase in the proportion of children with fever after vaccination in 2006 compared with either vaccine dose in 2005 was statistically significant for both age groups (P < 0.05). In addition, two serious adverse events were reported from this study. Both reports were of fever and vomiting on the evening of vaccination with CSL 2006 TIV, with one of the children also experiencing a febrile convulsion — a clinical picture similar to the adverse events subsequently associated with CSL TIV in 2010.3,6,9

2007 US FDA finding: paediatric safety not established

In March 2007, CSL submitted a biologics license application (BLA) to the US Food and Drug Administration (FDA) requesting approval to market TIV for adults in the US. To enhance the safety database, CSL also provided the FDA with data from the 2005–2006 Australian paediatric study. The FDA concluded that the Australian paediatric study had not identified any unusual safety concerns,8 although a separate assessment by FDA statisticians conducted later noted the small sample size and lack of comparator arm, and stated “this study was not designed to test any hypothesis”.11 In September 2007, the FDA wrote that “ . . . the pediatric study was not controlled for safety. Therefore, at this time the data will not be considered for approval in a pediatric population”.8 Accordingly, the prescribing information for the CSL TIV formulation distributed in the US for 2008–2009 stated that the “safety and effectiveness in the pediatric population have not been established”.12

The US Pediatric Research Equity Act of 2003 requires that clinical studies are conducted in children for biological products under development.8,13 As part of the accelerated approval of CSL TIV for adults in the US, CSL agreed to conduct the first randomised controlled trial of CSL TIV in children, which was scheduled to begin in August 2009 (CSLCT-USF-07-36).8

In the interim, two developments prompted the FDA to reassess the paediatric indication for CSL TIV without waiting for the results from this study. The first was a decision in 2008 by the Advisory Committee on Immunization Practices to expand the recommendation for annual influenza vaccination to include children 5 to < 18 years of age.6 The second was the onset of the influenza A(H1N1) pandemic in April 2009. Accelerated approval of CSL’s seasonal influenza vaccine for children would facilitate licensure of CSL’s monovalent pandemic vaccine for children because

an approved pediatric indication [for CSL TIV] would permit approval of the H1N1 vaccine in children as a strain change as has been done for adults, and would obviate the need for an Emergency Use Authorization in the pediatric population.6

While acknowledging the limitations of the existing data, the FDA concluded that “due to constraints related to the influenza shortage in 2004 and current concerns related to the circulating H1N1 pandemic swine flu strain, less stringent criteria for submission for BLA is acceptable”.11 Ultimately, the FDA determined that “extenuating circumstances have changed the risk benefit ratio for the pediatric indication” and recommended that CSL TIV “be granted approval in children 6 months to < 18 years of age because of newly recognized potential clinical benefit that outweigh known risks”.6

Data emerge from the first controlled trial of the CSL TIV in children

The paediatric trial of CSL 2009–2010 TIV compared with a US-licensed comparator (NCT00959049) was completed in May 2010, just weeks after suspension of childhood influenza vaccinations in Australia. Data from this unpublished study are available on the US National Institutes of Health website, albeit without statistical analysis.14 Independent examination of the data showed that children aged 6 months to < 3 years who received a first dose of CSL TIV experienced fever (≥ 37.5°C axillary or ≥ 38.0°C oral) nearly three times as often as those receiving the comparator (37% v 14%, respectively; P < 0.00005).14 In addition, children in this age cohort were significantly more likely to experience severe fever (> 39.5°C axillary or > 40.0°C oral; P < 0.05), irritability (P < 0.00005), loss of appetite (P < 0.005), or severe nausea/vomiting (P < 0.005) after receiving a first dose of CSL TIV compared with those receiving the comparator vaccine. Children aged 3 to < 9 years who received a first dose of CSL TIV were significantly more likely to experience fever (22% v 9%, respectively; P < 0.0005) and malaise (29% v 13%; P < 0.0005).14 The seasonal TIV formulation used for this trial was antigenically distinct from the formulation subsequently associated with severe febrile reactions in Australia in 2010 (specifically, the 2009–2010 northern hemisphere vaccine did not contain pandemic 2009 H1N1 strain antigens and used a different H3N2 vaccine strain).10 Taken together, data from this study and the experience in Australia in 2010 indicate that, compared with other contemporaneous TIVs, CSL TIV was associated with a higher risk of fever in children over two consecutive manufacturing seasons using different H1N1 and H3N2 viral strains.3,14 The findings from the 2005–2006 Australian study extend this observation, suggesting that CSL TIV may have been associated with a high risk of fever in children, at least intermittently, in other years.9
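
As a rough guide to how such a reanalysis of the registry data can be performed, the sketch below compares two fever proportions with a chi-squared test in Python; the arm sizes are hypothetical placeholders, not the actual counts from trial NCT00959049.

```python
# Hypothetical arm sizes for illustration only; the actual counts are reported
# on the US National Institutes of Health trial registry record (NCT00959049).
from scipy.stats import chi2_contingency

n_csl, n_comparator = 400, 200                 # placeholder numbers of children per arm
fever_csl = round(0.37 * n_csl)                # 37% with fever after first dose of CSL TIV
fever_comparator = round(0.14 * n_comparator)  # 14% with fever after comparator TIV

table = [[fever_csl, n_csl - fever_csl],
         [fever_comparator, n_comparator - fever_comparator]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"fever: {fever_csl}/{n_csl} v {fever_comparator}/{n_comparator}; "
      f"chi-squared = {chi2:.1f}, P = {p:.2g}")
```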

In hindsight, it would appear that the US decision to grant a paediatric indication for CSL TIV in 2009 without the benefit of data from a controlled paediatric clinical trial may have led to an optimistic assessment of the risks and benefits of this vaccine. The CSL TIV associated with severe febrile reactions in the southern hemisphere in 2010 was antigenically equivalent to that distributed in the northern hemisphere for the 2010–2011 influenza season.10 If Australia had not identified the safety signal in April 2010 — an event which led directly to health authority recommendations in the US and UK that CSL TIV not be administered to children aged < 5 years for the upcoming 2010–2011 influenza season — it is possible that febrile adverse reactions associated with this formulation might have been observed among children in those countries.15,16

It is nearly 3 years since the use of CSL TIV in young children was suspended, and laboratory investigations undertaken by CSL have not yet identified a definitive cause of the adverse reactions.17 CSL has acknowledged, however, that the increase in febrile adverse events in children in 2010 may have been due, at least in part, to “differences in the manufacturing processes used to manufacture CSL TIVs compared to other licensed TIVs on the market”.17 Suboptimal virus splitting or other mechanisms related to CSL’s use of deoxycholate have been suggested as possible contributing factors.18

Last year a joint working group of the Australian Technical Advisory Group on Immunisation and the TGA reviewed data on adverse events associated with different TIV brands among persons over 10 years of age and concluded that “the safety profile of many currently registered inactivated influenza vaccines is likely to differ and evidence to support this exists”.19 This observation underscores that assumptions regarding the safety of influenza vaccines may not be transferable across brands.

Lessons learned

CSL TIV was licensed for use in children in a number of countries without the benefit of data from controlled paediatric clinical trials. The results of the only paediatric randomised controlled trial to date, conducted 7 years after thiomersal-free Fluvax was licensed for children in Australia, and Australia’s experience in 2010 demonstrate the risks inherent with this approach. Ideally, adequately powered, controlled paediatric studies should be conducted before a vaccine is licensed for children.20 Regulatory decisions on a paediatric indication for a vaccine can be challenging and are even more difficult if the safety profile of the vaccine has not been established. If circumstances do not permit a rigorous assessment of a vaccine’s safety before licensure, as could be argued for the US in 2009 with an influenza pandemic approaching, this important caveat should be communicated to providers and consumers. In addition, given that the antigenic composition of influenza vaccines often changes from year to year, comprehensive postmarketing surveillance for adverse events is essential to maintain public trust and ensure the long-term success of paediatric influenza vaccination programs.21

Influenza vaccine and serious febrile reactions in children: timeline of key events

2002

Nov

  • Australia registers thiomersal-free CSL Fluvax trivalent influenza vaccine (TIV) for use in infants and young children

    • No prior paediatric safety or efficacy studies of Fluvax have been conducted

2004

Oct

  • Sweden licenses CSL TIV for use in children aged 6 months or older

    • CSL makes a postapproval commitment to conduct a paediatric study

2005

Mar

  • The first paediatric study of CSL Fluvax commences with 298 children

Apr

  • The United Kingdom licenses CSL TIV for use in children aged 6 months or older

Jun–Dec

  • Denmark and the Netherlands license CSL TIV for use in children aged over 6 months

2006

Jun

  • The first paediatric study of CSL Fluvax finishes

    • Two study participants experience serious adverse events “possibly” related to the vaccine; one is a febrile seizure

2007

Sep

  • United States Food and Drug Administration (FDA) gives CSL TIV accelerated approval for adults

    • FDA determines that safety and effectiveness of CSL TIV in the paediatric population have not been established

    • CSL commits to a paediatric trial using another US-licensed TIV as a control

2008

Mar

  • US expands influenza vaccinations to include all children 5–18 years of age

2009

Apr

  • World Health Organization signals that an influenza pandemic is imminent

Sep

  • First randomised controlled trial of CSL TIV in children begins in the US

Nov

  • FDA licenses CSL TIV for children citing pandemic and vaccine shortage concerns

2010

Apr

  • CSL 2010 TIV is associated with febrile seizures in up to 1 in 100 vaccinated children in Australia

    • Australia temporarily suspends all influenza vaccinations for children aged < 5 years

Jul

  • UK recommends not using CSL TIV in children < 5 years for the 2010–2011 season

Jul

  • Australia recommends not using 2010 CSL TIV for children < 5 years

Aug

  • US recommends not using CSL TIV in children < 5 years for the 2010–2011 season

2011

Jul

  • Results from first randomised controlled trial of CSL TIV in children become publicly available

    • CSL 2009–2010 TIV is associated with higher risk of fever and other adverse events than the US-licensed TIV comparator

2012

Oct

  • CSL publishes laboratory investigations, but the cause of the adverse events in 2010 remains undetermined

Characteristics of the community-level diet of Aboriginal people in remote northern Australia

Dietary improvement for Indigenous Australians is a priority strategy for reducing the health gap between Indigenous and non-Indigenous Australians.1 Poor-quality diet among the Indigenous population is a significant risk factor for three of the major causes of premature death — cardiovascular disease, cancer and type 2 diabetes.2 The 26% of Indigenous Australians living in remote areas experience 40% of the health gap of Indigenous Australians overall.3 Much of this burden of disease is due to extremely poor nutrition throughout life.4

Comprehensive dietary data for Indigenous Australians are not available from national nutrition surveys or any other source. Previous reports on purchased food in remote Aboriginal communities are either dated,5 limited to the primary store5,6 and/or short-term or cross-sectional in design.7,8 These studies have consistently reported low intake of fruit and vegetables, high intake of refined cereals and sugars, excessive sodium intake, and limited availability of several key micronutrients.

The aim of this study was to examine characteristics of the community-level diet in remote communities in the Northern Territory over a 12-month period.

Methods

We examined purchased food in three remote communities in relation to:

  • food expenditure;

  • estimated per capita intake;

  • nutrient profile (macronutrient contribution to energy) and nutrient density (nutrient per 1000 kJ) relative to requirements; and

  • major nutrient sources.

We collected information on community size, remoteness and availability of food in each community as well as community dietary data including all available foods with the exception of traditional foods and foods sourced externally to the community. Alcohol was prohibited in the three study communities at the time of our study.

Monthly electronic food (and non-alcoholic beverage) transaction data were provided by the community-owned store and independent stores in the three communities for July 2010 to June 2011. Food order data were collected from food suppliers for all food services in each of the three communities. All food and beverage items with their accompanying universal product code or store-derived product code, quantity sold, and dollar value (retail price) were imported to a purpose-designed Microsoft Access database9 and linked to the Food Standards Australia New Zealand Australian Food and Nutrient survey specific (AUSNUT 1999 and AUSNUT 200710) and reference (NUTTAB 06) databases (NUTTAB 06 has now been replaced by NUTTAB 2010). Folate dietary equivalent levels per 100 g were modified for bread and flour to equal NUTTAB 2010 levels since mandatory fortification was introduced. Unit weights were derived for all food and drink items and multiplied by the quantity sold to give a total item weight. Food items were categorised into food groups derived from the Australian Food and Nutrient Database AUSNUT 07 food grouping system10 and beverages were further categorised to provide a greater level of detail (Appendix 1). Nutrient compositions for several items not available in these databases were derived from the product’s nutrition information panel, which is mandatory on all packaged foods in Australia, or from standard recipes. Nutrient availability was derived for 21 nutrients. Energy and nutrient content per 100 g edible portion was multiplied by the edible weight (primarily sourced from Australian Food and Nutrient data10) of each of the food and beverage items (adjusted for specific gravity to convert mL to g weight) to derive total energy and nutrient content for each food group.
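
As a rough sketch of the derivation described above (not the study's actual database code, and with hypothetical item details and values), a single transaction line can be converted to total energy and nutrient availability by multiplying quantity sold by unit weight and edible portion, adjusting beverages for specific gravity to convert millilitres to grams, and then scaling the per 100 g composition.

```python
# Illustrative sketch only; item names, field names and values are hypothetical,
# not drawn from the study's Access database or the AUSNUT/NUTTAB files.
from dataclasses import dataclass

@dataclass
class FoodItem:
    name: str
    quantity_sold: int        # units sold in the month
    unit_size: float          # g (or mL for beverages) per unit
    is_beverage: bool
    specific_gravity: float   # used to convert mL to g for beverages
    edible_portion: float     # fraction of item weight that is edible
    energy_kj_per_100g: float
    sodium_mg_per_100g: float

def nutrient_totals(item: FoodItem) -> dict:
    """Total edible weight (g), energy (kJ) and sodium (mg) for one transaction line."""
    unit_g = item.unit_size * (item.specific_gravity if item.is_beverage else 1.0)
    edible_g = item.quantity_sold * unit_g * item.edible_portion
    return {
        "edible_weight_g": edible_g,
        "energy_kJ": edible_g * item.energy_kj_per_100g / 100,
        "sodium_mg": edible_g * item.sodium_mg_per_100g / 100,
    }

# Hypothetical example: monthly sales of 375 mL cans of a sugar-sweetened soft drink
soft_drink = FoodItem("cola, 375 mL can", quantity_sold=1200, unit_size=375,
                      is_beverage=True, specific_gravity=1.04, edible_portion=1.0,
                      energy_kj_per_100g=180, sodium_mg_per_100g=10)
print(nutrient_totals(soft_drink))
```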

Completeness and accuracy of data were ensured by: a check on the monthly time periods reported; follow-up with providers where a food description or unit weight was not available or where a discrepancy was noted; checking of unit weights against unit dollar value; and a second person checking the matching of foods with nutrient composition data and the assigning of food groups.

Data analysis

Data were grouped by community, food source, month and food group and transferred to Stata 10 (StataCorp) for analysis. Data for all food sources were combined (community food supply) and the average monthly and per capita daily weight and dollar value of each food group were calculated. Mean monthly and daily food weights were assumed to approximate mean monthly and daily dietary intakes for the data period.

The population of each of the three remote communities, and of the three communities combined, was estimated by dividing the total amount of energy provided through the community-level diet by the estimated weighted per capita energy requirement, assuming energy balance. The estimated total population was verified against Australian Bureau of Statistics (ABS) estimates.11 The weighted per capita energy requirement was determined for each community using the estimated energy requirement for each age group and sex, as stated in the Nutrient Reference Values for Australia and New Zealand12 (with a physical activity factor of 1.6 [National Health and Medical Research Council — light activity13]), in conjunction with the population age and sex distribution as determined by the 2006 ABS population census for each of these three communities.
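
A minimal sketch of the population estimate described above, using hypothetical numbers: the daily energy available through the community food supply is divided by a population-weighted per capita energy requirement (estimated energy requirements by age and sex at a light physical activity factor of 1.6, weighted by the census age and sex distribution).

```python
# Minimal sketch with hypothetical figures; the actual estimated energy requirements
# come from the Nutrient Reference Values for Australia and New Zealand, and the
# age-sex weights from the 2006 ABS census for each community.
daily_energy_requirement_kj = {   # hypothetical requirements (kJ/day, activity factor 1.6)
    "children": 7500,
    "adult_women": 9500,
    "adult_men": 12000,
}
population_share = {              # hypothetical age-sex distribution from the census
    "children": 0.35,
    "adult_women": 0.33,
    "adult_men": 0.32,
}

# Population-weighted per capita energy requirement (kJ/person/day)
weighted_requirement = sum(daily_energy_requirement_kj[g] * population_share[g]
                           for g in population_share)

total_energy_kj = 9.5e9           # hypothetical energy supplied over the 12-month period
days = 365

# Assuming energy balance, population = daily energy supplied / per capita requirement
estimated_population = total_energy_kj / days / weighted_requirement
print(f"weighted requirement: {weighted_requirement:.0f} kJ/person/day")
print(f"estimated population: {estimated_population:.0f}")
```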

Nutrient density was calculated for each nutrient by dividing the total nutrient weight by the energy value of the community food supply. Population-weighted nutrient density requirements were derived using estimated average requirements (EARs).12 The EAR for nutrients is stated as a daily average and varies by age and sex. EARs are estimated to meet the requirements of half the healthy individuals of a particular age group and sex and are used to assess the prevalence of inadequate intakes at a population level.12 A nutrient density level below the weighted EAR per 1000 kJ was considered insufficient in meeting the population’s requirements.
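
The nutrient density test can be sketched in the same way (again with hypothetical values): divide the total amount of each nutrient in the food supply by its total energy to give nutrient per 1000 kJ, and compare this with the population-weighted EAR expressed per 1000 kJ of the weighted energy requirement.

```python
# Hypothetical values for illustration; real EARs are taken from the Nutrient
# Reference Values and weighted by the community age-sex distribution.
total_energy_kj = 9.5e9                               # community food supply, 12 months
total_nutrient = {"calcium_mg": 7.0e8, "iron_mg": 1.4e7}

weighted_ear_per_day = {"calcium_mg": 900.0, "iron_mg": 7.0}   # hypothetical weighted EARs
weighted_energy_requirement_kj = 9600.0                        # kJ/person/day (hypothetical)

for nutrient, total in total_nutrient.items():
    density = total / total_energy_kj * 1000                   # nutrient per 1000 kJ supplied
    ear_density = weighted_ear_per_day[nutrient] / weighted_energy_requirement_kj * 1000
    verdict = "insufficient" if density < ear_density else "meets requirement"
    print(f"{nutrient}: {density:.1f} v EAR {ear_density:.1f} per 1000 kJ -> {verdict}")
```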

Adequate intake (AI) values were used for nutrients for which no EAR was available (potassium, dietary fibre and vitamin E α-tocopherol equivalents). The midpoint of the AI range for sodium was used. Macronutrient profiles (the proportions of dietary energy from protein, total fat, saturated fat, carbohydrate and total sugar) were compared with acceptable macronutrient distribution ranges.14 Major food sources were defined as foods contributing 10% or more of a specific nutrient.

Ethics approval was provided by the Human Research Ethics Committee of Menzies School of Health Research and the Northern Territory Department of Health and the Central Australian Human Research Ethics Committee. Written informed consent was gained from all participating communities, food businesses and food services.

Results

The estimated total population was 2644. Community populations ranged in estimated size from 163 to 2286 residents of mostly Aboriginal ethnicity and were comparable with regard to age and sex distributions.15 The distance from each community to the nearest food wholesaler ranged from 130 km to 520 km. Variation between the communities in remoteness, size, and number of food outlets is shown in Box 1.

Expenditure patterns

Average per capita monthly spending on food and non-alcoholic beverages in communities A, B and C, respectively, was $394 (SD, $31), $418 (SD, $82) and $379 (SD, $80). About one-quarter of all money spent on food and beverages was spent on beverages (combined communities, 24.8%; SD, 1.4%), with soft drinks contributing 11.6%–16.1% of sales across the three communities (combined communities, 15.6%; SD, 1.2%) (Appendix 2). This compares with less than 10% in total spent on fruit and vegetables in each of the three communities (7.3%, 9.1% and 8.9%; combined communities, 2.2% [SD, 0.2%] on fruit and 5.4% [SD, 0.4%] on vegetables) (Appendix 2).

Per capita daily intake

Based on population estimates, there appeared to be differences in the daily per capita volume of many food groups between community A and communities B and C, with less notable differences between communities B and C (Appendix 3).

On average, per capita daily intake of beverages (including purchased water and liquid tea) was 1464 g (SD, 130.5 g) with sugar-sweetened soft drinks comprising 298–497 g across communities (Appendix 3). Liquid tea constituted most of the remaining beverage volume. Daily per capita fruit and vegetable intake in community A (122 g) was just over half that of communities B (222 g) and C (247 g) (Appendix 3).

Macronutrient profile

For community A, the proportion of dietary energy as carbohydrate was at the higher end of the recommended range; for communities B and C it was within the recommended range. Sugars contributed 25.7%–34.3% of dietary energy across the three communities (Box 2), 71% of which was table sugar and sugar-sweetened beverages. The proportion of dietary energy from fat was within the acceptable range for each community, and was lower in community A than in communities B and C. The proportion of dietary energy as saturated fat was within the recommended range for community A and higher than recommended for communities B and C. The proportion of dietary energy as protein was lower than the recommended minimum in all three communities (Box 2).

Micronutrient density

With reference to weighted EARs (or AIs) per 1000 kJ for the nutrients measured, the diet in all three communities was insufficient in calcium, magnesium, potassium and fibre (Box 3). Iron, vitamin C and folate equivalents were all around double the weighted EAR per 1000 kJ, and niacin equivalents were nearly four times the EAR (Box 3). Sodium was the nutrient provided in the greatest excess, at nearly six times the midpoint of the adequate intake range (Box 3). Most nutrient density values appeared lower in community A than in communities B and C (Appendix 4).

Major nutrient sources

In all three communities, white bread fortified with fibre and a range of micronutrients was a major source of protein, fibre, iron, sodium, calcium, dietary folate, potassium, magnesium and B-group vitamins (Appendix 5). Sugar and sugar-sweetened beverages provided 65%–72% of total sugars (Appendix 5). Bread, salt and baking powder were major sources of sodium in all three communities. Major food sources of all nutrients were similar across the three communities (Appendix 5).

Discussion

Our comprehensive assessment of the community diet averaged over a 12-month period showed a high intake of refined cereals and added sugars, low intakes of fruit, vegetables and protein, limiting levels of key micronutrients, and excessive sodium. Our findings confirm recent and past reports of dietary quality in remote Aboriginal communities.5,8 We report food expenditure and dietary patterns that are similar to those reported previously using store sales data alone,5,6,8 as are the limiting nutrients (protein, potassium, magnesium, calcium and fibre).8

A striking finding of our study is the high expenditure on beverages, and the correspondingly high intake of sugar-sweetened beverages, coupled with low expenditure on (and low intake of) fruit and vegetables.

The level of sugar-sweetened soft drinks reported for communities B and C is in line with what we have previously reported for 10 NT communities from store data alone.6 The apparently substantially higher per capita volume reported for community A warrants further investigation, which could include examining variation in regional consumption, food delivery systems and food outlets. Similarly high per capita consumption of sugar-sweetened beverages has been reported among Aboriginal and Torres Strait Islander children in regional New South Wales (boys, 457 g/day; girls, 431 g/day) and for children at the national level (364.7 g/day).18,19 The high volume of tea purchased is also of concern, as tea is generally consumed as a sugar-sweetened beverage.

The low daily fruit and vegetable intake reported for the three study communities (which on average equated to 0.3 to 0.7 serves of fruit and 1.1 to 2.1 serves of vegetables) is consistent with the reported average of 0.4 serves of fruit and 0.9 serves of vegetables per person per day sold through 10 NT community stores in 2009,6 but lower than intakes self-reported among other Aboriginal populations in remote Queensland and regional NSW.18,20,21 Our estimates do suggest improved intakes compared with the low levels of fruit and vegetable intake reported nearly three decades earlier for six remote NT communities.5 Caution is needed in making comparisons with past studies owing to the use of different methodologies. It has been estimated that increasing fruit and vegetable consumption to up to 600 g per day could reduce the global burden of ischaemic heart disease and stroke by 31% and 19%, respectively.22 The benefits for the Indigenous population are likely to be much greater, considering their currently low intake of fruit and vegetables and high burden of disease.

A further disturbing aspect of the diet is that fibre-modified and fortified white bread is providing a large proportion of key nutrients, including protein, folate, iron, calcium and magnesium, and unacceptably high levels of sodium. Similarly, among Aboriginal and Torres Strait Islander children in regional NSW, bread was also reported to be a major dietary source of energy, salt and fibre.18 It is alarming that white bread is providing a large percentage of dietary protein when it is a poor protein source. Considering the high-quality protein foods traditionally consumed by Aboriginal Australians,23 this apparent shift to a low-protein and high-carbohydrate diet needs investigation. Traditional foods, such as fish and other seafood, eggs and meat provide high-quality protein, but are unlikely to be significant at the population level if not accessed frequently and by a substantial proportion of the population.

The extremely high rates of preventable chronic disease experienced among Aboriginal people in remote Australia, together with the high intake of sugar-sweetened beverages, unacceptably low intake of fruit and vegetables, and limiting levels of essential nutrients, provide a compelling rationale for doing more to improve diet and nutrition. Poverty is a key driver of food choice,24–26 and although most Indigenous people living in remote communities are in the low income bracket, a standard basket of food costs, on average, 45% more in remote NT communities than in the NT capital.27 People in the study communities spend more on food ($379 to $418 per person per month) than the expenditure estimated for other Australians ($314 per person per month, with 2.6 persons per household).28 To our knowledge, our study provides the only available estimate of remote community food and drink expenditure. Household expenditure data are not available for very remote Australia, representing a gap in information on food affordability, a major determinant of health.

Our study highlighted some important differences in dietary quality between the study communities, with the dietary profile for community A being generally poorer. This may be indicative of intercommunity or regional differences, such as community size, number of food outlets, location and remoteness, access to food outlets, level of subsistence procurement and use of traditional foods, climate, housing or water quality, and warrants broader investigation.

As with individual-level dietary assessment, there are limitations in estimating community-level dietary intake. An inherent issue in community-level per capita measures is the difficulty of determining the population for the study period, so caution is required in using the values presented here; however, the total population (2644) was verified against ABS predicted estimates for the 2011 Australian remote Indigenous population (2638) and was within 4% of the later released ABS census data collected in 2010 for the three study communities (2535). Further, monthly per capita dietary intake estimates were averaged over a 12-month period and are likely to account for the fluctuations in population that occur in remote communities seasonally and over time. A strength of our study is that expenditure patterns based on proportional spending, the macronutrient profile and nutrient density provide an assessment of dietary quality that is entirely independent of population size estimates. Furthermore, as dietary data were derived from food sales records rather than self-report, they provide an objective assessment of diet quality. Limitations in using food sales data as a measure of dietary intake have been reported previously.8 Estimated per capita energy intakes for communities A and B differed by less than 10% from per capita requirements derived from 2010 ABS census population figures, indicating completeness of food sales data. Estimated energy intake for community C was lower than required, at 81% of per capita requirements.

Reports on dietary quality are also limited by the accuracy of food composition databases. For example, the range of nutrients presented for each food in the Australian food composition database varies depending on the analytical data available. Nutrient levels reported in this study are based on currently available nutrient composition data.29

A limitation in assessing the nutritional quality of the community-level diet using purchased food data is the exclusion of traditional food intake. It is assumed that traditional food contributes minimally to community-level dietary intake, as not all families have access to traditional foods and procurement usually does not occur on a regular basis. However, the contribution of traditional food to dietary intake has not been investigated, and we recognise that it would be important in future studies to quantify its contribution to total food intake. The low expenditure on (and therefore low intake of) high-quality protein foods suggests either that these foods are not affordable or that they are accessed through subsistence procurement. However, mean daily energy intake estimates based on 2010 census data indicate that the great majority of energy required is provided through the imported food supply.

Despite these limitations, this study provides an objective, contemporary and comprehensive assessment of the community-level diet in three remote Indigenous communities without the inherent limitations of individual-level dietary intake assessment. It provides evidence on key areas of concern for dietary improvement in remote Aboriginal communities.

Very poor dietary quality has been a characteristic of community nutrition profiles in remote Indigenous communities in Australia for at least three decades. Significant proportions of a number of key micronutrients are provided as fortification in a diet derived predominantly from otherwise poor-quality, highly processed foods. Ongoing monitoring of the community-level diet (through use of food sales data) is needed to better inform the development and implementation of policy and strategy at the community and wider levels. Low income is undoubtedly a key driver of diet quality. Further evidence regarding the impact of the cost of food on food purchasing in this context is urgently needed, and the long-term cost-benefit of dietary improvement needs to be considered.

1 Community characteristics

Community | Population (2006)* | Population (2010)* | Estimated population | Distance from food wholesaler; location | Access | Food stores | Food services
A | 1697 (49% male; 703 residents < 18 years) | 2124 (50% male) | 2286 | > 500 km; island in Top End region | Regular daily flight | Community-owned store; two independent stores | Aged care meals, child care, school canteen, school lunch program, breakfast program
B | 250 (49% male; 94 residents < 18 years) | 210 (49% male) | 202 | > 400 km; central desert region | Sealed and unsealed road | Community-owned store | Aged care meals, school lunch program, child care
C | 217 (43% male; 73 residents < 18 years) | 201 (49% male) | 163 | < 150 km; central desert region | Sealed and unsealed road | Community-owned store | Aged care meals, child care, school lunch program, breakfast program


* Based on Australian Bureau of Statistics (ABS) census data.11,15 The total study population of 2644 was derived from the total energy available in the purchased food supply and the weighted per capita energy requirement based on the total population age and sex distribution; this figure was used for analyses in which data for all communities were combined, rather than the sum of the individual community estimates (2651). All three communities are classified by the ABS Australian Standard Geographical Classification (http://www.health.gov.au/internet/otd/publishing.nsf/Content/locator) as RA5 (very remote).

2 Estimated energy availability and macronutrient profile, overall and by community

Measure | Community A | Community B | Community C | All communities | Recommended range14
Estimated per capita energy intake based on 2010 census population (kJ) | 9845 | 9119 | 7623 | 9608 | –
Estimated per capita energy intake based on estimated energy requirement* (kJ [SD]) | 9147 (927) | 9480 (1644) | 9400 (1740) | 9212 (856) | –
Macronutrient distribution as a proportion of dietary energy (% [SD])
Protein | 12.5% (0.3) | 14.1% (0.8) | 13.4% (0.6) | 12.7% (0.3) | 15%–25%
Fat | 24.5% (0.6) | 31.6% (1.5) | 33.5% (1.1) | 25.7% (0.6) | 20%–35%
Saturated fat | 9.4% (0.3) | 11.6% (0.6) | 12.1% (0.3) | 9.7% (0.3) | < 10%
Carbohydrate | 62.1% (0.8) | 53.3% (1.8) | 52.1% (1.1) | 60.7% (0.8) | 45%–65%
Sugars | 34.3% (0.8) | 28.9% (2.2) | 25.7% (1.8) | 33.4% (0.7) | < 10%†


* Estimated energy requirements were calculated by age group (1–3 years; 4–8 years; 9–13 years; 14–18 years; 19–30 years; 31–50 years; 51–70 years; > 70 years) and sex based on Nutrient Reference Values for Australia and New Zealand, tables 1–3.11 For age 19 to > 70 years, the midpoint height and weight of each adult age group was used. For < 18 years, the midpoint of the estimated energy requirement range across each age and sex category was used. Energy expenditure was estimated at 1.6 × basal metabolic rate overall. We estimated 8% of women aged 14–50 years were pregnant and 8% were breastfeeding, based on Australian Bureau of Statistics 2006 births data, table 9.216 and 2006 census data for women aged 13–54 years.15 † Recommendation for “free sugars” — all monosaccharides and disaccharides added to foods by the manufacturer, cook or consumer, plus sugars naturally present in honey, syrups and fruit juices.17

3 Nutrient per 1000 kJ as a percentage of weighted estimated average requirement (EAR) per 1000 kJ,* overall and by community

* Adequate intake values were used for nutrients for which no EAR was available (potassium, dietary fibre, vitamin E α-tocopherol equivalents, sodium).

Improving flu prevention posters and reducing the risk of infection during outbreaks

To the Editor: As winter, and the flu season, approaches, I would like to propose some changes to the flu prevention posters circulated by New South Wales Health. After living in the United States for some time, I grew accustomed to seeing numerous posters around Boston highlighting the risks of flu and how to prevent it spreading, especially during the pandemic (H1N1) 2009 influenza outbreak. This preventive measure is important, and it was emphasised at my daughter’s childcare centre, where all the children and staff were encouraged to learn good hygiene and techniques for stopping the spread of flu.

One aspect of this that stands out in my mind (mostly because of my daughter and her constant demonstration of what she was taught at child care) is the practice of coughing or sneezing into your sleeve rather than your hands when a tissue is not available, given that individuals rarely have a tissue ready for a quick reflex cough or sneeze. This practice was recommended by the Centers for Disease Control and Prevention in the US,1 and is shown in the poster from the Minnesota Department of Health (Box 1). However, during the 2010 flu season in NSW, I was surprised to see similar posters on buses and trains showing coughing into hands as an acceptable method of preventing flu transmission. Box 2 shows a recent poster from NSW Health that carries this message.

I believe that this should be amended in future messages concerning flu transmission prevention from NSW Health or any other Australian health organisation. I am confident that coughing or sneezing into the hands assists the spread of flu, either through direct contact or through contact with fomites. This risk would be reduced substantially if individuals were educated and informed that coughing or sneezing into their sleeve is a preferred method of preventing flu transmission.

1 Flu transmission prevention poster from the Minnesota Department of Health,2 which recommends sneezing into the sleeve if a tissue is not available

2 Flu transmission prevention poster from NSW Health in 2012,3 which suggests that coughing or sneezing into hands is acceptable

Shift to earlier stage at diagnosis as a consequence of the National Bowel Cancer Screening Program

Stage at diagnosis is critical in determining the probability of survival with colorectal cancer (CRC). In randomised controlled trials, population screening using faecal occult blood tests (FOBT) resulted in an earlier stage at diagnosis for screen-detected cancers,1 and in reduced mortality from colorectal malignancy compared with controls.2–4 Evaluations of cancer prevention programs with mortality as an end point take many years to complete. However, because early stage at diagnosis is linked to better prognosis and reduced mortality from CRC, stage at diagnosis can serve as a surrogate marker for population mortality and provides an early signal of program benefit.

After a pilot study in 2003, a faecal immunochemical test (FIT)-based National Bowel Cancer Screening Program (NBCSP) has been progressively rolled out across Australia. Participants in the program receive a free two-sample FIT kit by mail from a central register, collect samples and return them for testing. Results are mailed to participants and their nominated primary care practitioner (PCP). The PCP arranges follow-up of people with positive FIT results.

There is mandatory reporting of CRC in Australia, and the South Australian Cancer Registry (SACR) holds up-to-date records of CRC diagnoses in South Australia, including tumour stage. Thus the records held by the SACR and the NBCSP register provide an opportunity to evaluate the effect of the NBCSP, as implemented in SA, on CRC stage at diagnosis.

Our primary aim was to determine whether CRCs diagnosed in people who had been invited to the NBCSP were diagnosed at an earlier stage than CRCs diagnosed in people not invited to the program. Our secondary aim was to determine whether downstaging was evident in the subpopulations that participated or that had positive test results in the screening program.

Methods

Patients were eligible for inclusion if they had CRC that had been reported to the SACR with a date of diagnosis between 1 January 2003 and 31 December 2008, and if they were aged 55–75 years at the date of diagnosis. This date and age range ensured inclusion of individuals invited to have a screening test in the NBCSP pilot program in SA (February 2003 to June 2004, with eligible participants aged 55–74 years on 1 January 2003) or in the NBCSP Phase I (22 January 2007 to 30 June 2008, with participants eligible if they turned 55 or 65 years of age in that period).

We compared the stage profiles of eligible patients invited to the NBCSP (invited), those who took up the offer to have a screening test (participant) and those who had positive results in the screening test (positive), relative to the stage profile of the study population excluding the group of interest (all other patients), on an intention-to-screen basis. Patients were allocated to the invited, participant and positive cohorts if their date of diagnosis was between 15 and 365 days from the date of invitation to participate in the NBCSP pilot program or Phase I trial. Finally, to gain some insight into the value of an invitation alone, we compared the stage profiles of patients who were invited to the NBCSP but did not participate in testing with those of patients who were not invited.
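The allocation rule amounts to a simple date-window check, sketched below; the function and field names are hypothetical and not taken from the study's data dictionary.

```python
# Illustrative sketch of the cohort allocation window (field and function names assumed):
# a patient enters the invited cohort if diagnosis falls 15-365 days after the invitation.
from datetime import date
from typing import Optional

def in_invited_cohort(diagnosis: date, invitation: Optional[date]) -> bool:
    if invitation is None:          # never invited to the pilot program or Phase I
        return False
    lag_days = (diagnosis - invitation).days
    return 15 <= lag_days <= 365

print(in_invited_cohort(date(2007, 8, 1), date(2007, 3, 1)))   # True: 153 days after invitation
print(in_invited_cohort(date(2008, 6, 1), date(2007, 3, 1)))   # False: more than 365 days
```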

CRC stage was defined according to the Australian Clinico-Pathological Staging System (ACPS), with stages graded from A to D in order of increasing disease spread.5 Experienced SACR staff extracted ACPS stage from clinical reports. Where stage data were incomplete, additional information was sought from three public hospital-based cancer registries.

A list of invitees to both the NBCSP pilot program and Phase I trial was obtained from the NBCSP register. The Australian Institute of Health and Welfare carried out data-matching and provided a merged and de-identified dataset with, for each individual, CRC stage at diagnosis, NBCSP invitation status, NBCSP participation status, FIT result, age, sex, socioeconomic status (SES) and remoteness index data. Cohort stage profiles were compared by χ2 analyses. Multinomial logistic regression was performed using Stata version 12 (StataCorp) to control for possible differences between cohorts in age, sex, SES and geographical remoteness.
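The sketch below illustrates the two analysis steps in Python rather than Stata 12, purely for illustration: a χ2 comparison of stage profiles using the counts reported in Box 3, and a multinomial logistic model fitted to simulated (not real) patient-level data, whose exponentiated coefficients correspond to the relative rate ratios reported in Box 4.

```python
# Illustrative analysis sketch (not the authors' Stata code).
import numpy as np
import pandas as pd
from scipy.stats import chi2_contingency
import statsmodels.api as sm

# Chi-squared comparison of stage profiles: invited cohort vs all other patients,
# using the six stage categories and counts shown in Box 3.
counts = np.array([[9, 15, 77, 48, 60, 12],          # invited (n = 221)
                   [222, 209, 627, 941, 857, 404]])  # all other patients (n = 3260)
chi2_stat, p_value, dof, _ = chi2_contingency(counts)
print(f"chi2({dof}) = {chi2_stat:.1f}, P = {p_value:.2g}")

# Multinomial logistic regression on simulated patient-level data; exponentiated
# coefficients are interpreted as relative rate ratios vs stage A (the base outcome).
rng = np.random.default_rng(0)
n = 1000
data = pd.DataFrame({
    "stage": rng.choice(["A", "B", "C", "D"], size=n),
    "invited": rng.integers(0, 2, size=n),
    "age": rng.integers(55, 76, size=n),
    "male": rng.integers(0, 2, size=n),
})
y = pd.Categorical(data["stage"], categories=["A", "B", "C", "D"]).codes  # A coded 0 = base
X = sm.add_constant(data[["invited", "age", "male"]].astype(float))
model = sm.MNLogit(y, X).fit(disp=False)
print(np.exp(model.params))  # one column per non-base stage (B, C, D)
```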

Ethics approval was obtained from the SA Health Human Research Ethics Committee and the Department of Health and Ageing Departmental Ethics Committee. Additional approvals were obtained from the Epidemiology Branch, SA Health, for access to the SACR; from the Royal Adelaide Hospital, Queen Elizabeth Hospital and Flinders Medical Centre for access to hospital-based registries; and from Medicare Australia for extracting data from the NBCSP register.

Results

We identified 3481 eligible patients with CRC reported to the SACR. Of these, 221 were allocated to the invited cohort. Staging data were available for 87.0% of patients: no data were available for 6.6%, and a further 6.4% had insufficient data to determine ACPS stage. The invited cohort differed significantly from all other patients in age, SES and remoteness (Box 1).

CRC stage according to invitation to the NBCSP

The stage profiles of the invited cohort compared to the rest of the study population (where stage was known) are shown in Box 2. The difference in stage profiles was highly significant (χ2 = 39.5; P < 0.001; Box 3). In the invited group, the percentage of stage A cancers was 34.8%, versus 19.2% in all other patients (P < 0.001). Similarly, the percentage of stage D cancers was 5.4% in the invited group versus 12.4% in all other patients (P = 0.002).

There was a further shift towards earlier stage at diagnosis when the participant group was compared with all other patients (χ2 = 47.7; P < 0.001). In the participant group, the proportion with stage A was almost double that of all other patients (38.8% versus 19.3%; P < 0.001), while the percentage with stage D was 3.0% versus 12.4% in all other patients (P < 0.001). This trend continued when the positive subgroup stage profile was compared with that of all other patients (χ2 = 47.4; P < 0.001). Of those in the participant group, 151/165 (91.5%) returned a positive FIT result through the NBCSP. The percentage with stage A was twice that of all other patients (39.7% compared with 19.3%; P < 0.001), while the percentage with stage D was 2.6% compared with 12.4% in all other patients (P < 0.001).

Analyses that included or excluded patients with unknown cancer stage had no effect on the statistical significance of any of the findings. Multivariate analyses showed that age and SES were significantly associated with stage at diagnosis (Box 4). However, stage A lesions were significantly more likely to be diagnosed than stage B, C or D CRC in the invited cohort relative to all other patients, while controlling for age, SES and remoteness. Stage A lesions were also more likely to be diagnosed in the participant and positive subgroups.

Finally, we compared the stage profiles of patients who were invited to the NBCSP but did not participate with the stage profiles of those who were not invited, to determine whether simply receiving an invitation but not participating led to downstaging. These groups did not differ in stage profile (χ2 = 1.07; P = 0.78).

Discussion

In this intention-to-screen analysis-based evaluation of the NBCSP, we found that CRCs diagnosed in people within 1 year of receiving an invitation to participate in the screening program were on average at an earlier stage than CRCs diagnosed in people who did not receive an invitation. There was a large and highly significant increase in stage A lesions and a corresponding decrease in stage D CRC in those invited to the program relative to the rest of the study population, and the shift towards earlier stage progressively increased in participants in the screening test and in those who were recorded as having positive results in the FIT. Thus CRC downstaging was associated with an invitation to the NBCSP, and the strength of the effect increased in groups that excluded non-participants or people who had negative results in the FIT.

Downstaging was evident regardless of the inclusion or exclusion of patients with missing or insufficient data to determine staging. In addition, in a multivariate model, the relationship between early stage and screening through the NBCSP persisted when possible confounders — age, SES and remoteness — were taken into account.

Earlier detection of CRC has a major impact on survival. United Kingdom data show 5-year relative survival rates of > 90% for Dukes’ stage A cancer and < 7% for Dukes’ stage D (Dukes’ cancer stages are graded A–D in order of increasing spread and metastases).6 As randomised controlled trials have shown that CRCs detected through screening are diagnosed at an earlier stage, and that screened populations have reduced mortality relative to control populations,1–4 it is valid to use downstaging as a surrogate for the effect on mortality. The significantly earlier stage profile in patients who participated in the NBCSP should therefore lead to reduced mortality in this population. Although only a relatively small proportion of the eligible Australian population is currently offered screening each year, the proposed gradual expansion of the NBCSP should result in greater reductions in CRC mortality over time, assuming that participation rates remain stable or increase.

Our findings are consistent with an earlier report using a hospital-based database of CRC patients, which showed an earlier stage distribution in people self-reporting that they were diagnosed through the NBCSP, compared with stage in symptomatic patients (ACPS stage I, 40% in those diagnosed through the NBCSP versus 14% in non-participants; and stage IV, 3% in those diagnosed through the NBCSP versus 15% in non-participants).7 However, that study did not assess all CRCs diagnosed in the entire population. Further, the study was subject to recall bias and did not analyse results on an intention-to-screen basis. Our study included all cases of CRC reported to the SACR over the periods of implementation of the NBCSP pilot program and Phase I trial, and was based on an intention-to-screen analysis, which has allowed us to avoid sampling, temporal and follow-up quality bias.

Our results are also consistent with overseas evaluations of national CRC screening programs, although the methods used vary depending on the health system. The National Bowel Cancer Screening Programme in England reported a shift towards earlier stage disease in participants compared with patients with cancer diagnosed before the screening program.8 However, it is difficult to determine whether that downstaging represents improvement in practice over time or whether it was a direct result of the program. A decrease in the proportion of more advanced stage tumours for both men and women (but significant only in men) was also seen in the early stages of the English bowel cancer screening program, in a comparison of those who took up the offer of screening with those who did not,9 but the effect was not compared with stage distribution in patients diagnosed outside of the program. The Scottish CRC screening demonstration pilot study found a high proportion of cancers at Dukes’ stage A (almost 50%) when screening with guaiac faecal occult blood testing (gFOBT).10 A similar high proportion of stage A cancers was observed in the French pilot study.11 Unlike the overseas programs, Australia’s NBCSP uses the FIT, and this is the first report of downstaging in a mass screening program using this testing method.

It was important to analyse the program in the first instance on an intention-to-screen basis, because an impact at that level demonstrates the value of the public health program and justifies its implementation.

This study has several strengths. Data were obtained from independently held and well managed databases; individuals were matched across databases and then de-identified by an independent third party before analysis by the investigators. Selection bias was minimised, if not removed altogether, as it is unlikely that the proportion of CRCs reported to the SACR differed between NBCSP participants and patients diagnosed outside the program. All CRC diagnoses in the study population resulted from usual follow-up of patients after testing through the existing public and private primary care systems, so there were no systematic biases in the type of follow-up received by each cohort or in the time from referral to diagnosis. Additionally, stage data were extracted and interpreted by experienced SACR staff from histopathology reports. The cohorts examined had similarly low proportions of patients with unknown CRC stage because of missing or insufficient data. Finally, this was a whole-of-population study that compared CRC stage at diagnosis between populations differing only in screening invitation status.

Although this is an observational study and it could be argued that other factors might have influenced stage, it was possible to adjust for a number of potential confounders. A second concern was that it was impossible to directly attribute an invitation to the NBCSP to a specific diagnosis of CRC. However, allocating patients to the invited cohort on the basis of a diagnosis between 14 and 366 days from the date of invitation is reasonable, considering the time taken for the clinical steps to final diagnosis after a positive test result; 14 days would appear to be the shortest time to a diagnosis. This timeline from the date of referral for colonoscopy to a diagnosis of CRC is consistent with results of studies across different health systems.12

Conclusion

In the context of a national CRC screening program with normal follow-up care for patients after testing, CRCs were diagnosed at a significantly earlier stage in people who had been invited to the program compared with people not invited to the program. Benefits were even greater in screening participants and those with positive results in the FIT. These results show that CRC screening works in practice and is likely to reduce CRC mortality in Australia.

1 Demographics of patients invited to the National Bowel Cancer Screening Program compared with those of all other patients in the study population


Variable | Invited patients (n = 221) | All other patients (n = 3260)
Sex*
Male | 125 (56.6%) | 1930 (59.2%)
Female | 96 (43.4%) | 1330 (40.8%)
Age†
55–59 years | 55 (24.9%) | 525 (16.1%)
60–64 years | 21 (9.5%) | 671 (20.6%)
65–69 years | 94 (42.5%) | 864 (26.5%)
70–74 years | 43 (19.5%) | 988 (30.3%)
> 74 years | 8 (3.6%) | 212 (6.5%)
Area-level disadvantage by SEIFA quintile‡
1 (most disadvantaged) | 45 (20.4%) | 720 (22.1%)
2 | 27 (12.2%) | 763 (23.4%)
3 | 49 (22.2%) | 659 (20.2%)
4 | 73 (33.0%) | 584 (17.9%)
5 (least disadvantaged) | 27 (12.2%) | 534 (16.4%)
Remoteness index§ (based on Accessibility/Remoteness Index of Australia)
Urban | 189 (85.5%) | 2171 (66.6%)
Rural | 13 (5.9%) | 473 (14.5%)
Remote | 19 (8.6%) | 616 (18.9%)

SEIFA = Socio-Economic Indexes for Areas.
* χ2 = 0.60; P = 0.434. † χ2 = 52.71; P < 0.001. ‡ χ2 = 39.41; P < 0.001. § χ2 = 34.00; P < 0.001.

2 Distribution of colorectal cancer stage in patients (where stage was known; n = 3026) according to whether they were invited to participate in the National Bowel Cancer Screening Program (NBCSP)*

ACPS = Australian Clinico-Pathological Staging System.
* Only patients with known colorectal cancer stage are included in sample shown; patients with missing or insufficient stage data are excluded.

3 Colorectal cancer stage distribution of patients who were invited to, participated in or had positive test results in the National Bowel Cancer Screening Program (NBCSP), compared with the study population excluding the group of interest

ACPS cancer stage | Invited to NBCSP (n = 221) | All other patients (n = 3260)* | Participated in NBCSP (n = 165) | All other patients (n = 3316)† | Positive test result in NBCSP (n = 151) | All other patients (n = 3330)‡
No stage data | 9 (4.1%) | 222 (6.8%) | 5 (3.0%) | 226 (6.8%) | 5 (3.3%) | 226 (6.8%)
Insufficient data to assess stage | 15 (6.8%) | 209 (6.4%) | 11 (6.7%) | 213 (6.4%) | 9 (6.0%) | 215 (6.5%)
A | 77 (34.8%) | 627 (19.2%) | 64 (38.8%) | 640 (19.3%) | 60 (39.7%) | 644 (19.3%)
B | 48 (21.7%) | 941 (28.9%) | 35 (21.2%) | 954 (28.8%) | 31 (20.5%) | 958 (28.8%)
C | 60 (27.1%) | 857 (26.3%) | 45 (27.3%) | 872 (26.3%) | 42 (27.8%) | 875 (26.3%)
D | 12 (5.4%) | 404 (12.4%) | 5 (3.0%) | 411 (12.4%) | 4 (2.6%) | 412 (12.4%)

ACPS = Australian Clinico-Pathological Staging System.
* Invited versus all other patients: χ2 (5) = 39.2; P < 0.001. † Participants versus all other patients: χ2 (5) = 47.5; P < 0.001. ‡ Positive test results versus all other patients: χ2 (5) = 47.4; P < 0.001.

4 Multivariate modelling of invitation to the National Bowel Cancer Screening Program (NBCSP) as a predictor of colorectal cancer stage at diagnosis*

Variable | ACPS stage B: RRR (95% CI); P | ACPS stage C: RRR (95% CI); P | ACPS stage D: RRR (95% CI); P
Invited to NBCSP | 0.42 (0.28–0.61); 0.000 | 0.53 (0.37–0.77); 0.001 | 0.24 (0.13–0.45); 0.000
Age | 1.00 (0.99–1.02); 0.58 | 0.98 (0.96–0.99); 0.01 | 0.98 (0.96–1.0); 0.07
Male sex | 0.90 (0.74–1.10); 0.30 | 0.89 (0.72–1.08); 0.22 | 1.06 (0.82–1.36); 0.65
Area-level disadvantage by SEIFA quintile†
2 | 0.89 (0.66–1.20); 0.46 | 0.73 (0.53–0.99); 0.04 | 0.9 (0.63–1.29); 0.58
3 | 0.79 (0.58–1.07); 0.13 | 0.86 (0.64–1.17); 0.35 | 0.57 (0.39–0.84); 0.01
4 | 0.88 (0.61–1.21); 0.44 | 0.79 (0.57–1.11); 0.16 | 0.63 (0.42–0.94); 0.02
5 (least disadvantaged) | 0.76 (0.54–1.06); 0.11 | 0.86 (0.62–1.20); 0.37 | 0.67 (0.44–1.00); 0.05
Remoteness index (based on Accessibility/Remoteness Index of Australia)
Rural | 0.92 (0.69–1.22); 0.57 | 0.86 (0.64–1.15); 0.31 | 0.76 (0.52–1.10); 0.15
Remote and very remote | 1.06 (0.80–1.39); 0.69 | 1.12 (0.85–1.47); 0.44 | 0.96 (0.68–1.34); 0.80

RRR = relative rate ratio. SEIFA = Socio-Economic Indexes for Areas.
* Relative to ACPS stage A base outcome. † Relative to SEIFA quintile 1.

Public reporting of health care-associated infection data in Australia: time to refine

National health care-associated infection indicators require validation, stakeholder input and risk adjustment to reflect quality improvement adequately

In December 2011, the Australian Institute of Health and Welfare (AIHW) launched the MyHospitals website, allowing national reporting of safety and quality indices for Australian hospitals, including health care-associated infection (HAI) indicators.1 In contrast to the United States and the United Kingdom, public reporting initiatives have lagged in Australia, with challenges identified in the design and implementation of reporting strategies.2 Specific avenues for improving HAI indicators are now emerging.

The MyHospitals website contains data reported by individual hospitals.1 While all public hospitals submit data, participation by private hospitals is voluntary. Safety and quality indicators include compliance with hand-hygiene practices and rates of Staphylococcus aureus bacteraemia (SAB). Each is compared with a national benchmark: greater than 70% compliance for hand hygiene, and fewer than 2 SAB events per 10 000 days of patient care.

Infection control performance indicators can be broadly categorised as “outcome” or “process” indicators. Outcomes refer to measurable end points, such as hospital length of stay or mortality. SAB events are outcome indicators reported by the MyHospitals website. Currently, all hospitals are compared with a single target rate, and jurisdictional performance is gauged on the aggregate for the relevant state or territory. However, many hospitals (eg, specialist paediatric or cancer hospitals, tertiary referral centres and small rural hospitals) have patient populations that are different from those of most general hospitals. For rational comparisons to be possible, data should be risk-adjusted or stratified to correct for differences in patient casemix across the wide range of health care facilities. This sometimes requires collection of additional data. Determining whether an infection is (i) present and (ii) health care-associated is not always straightforward, as definitions can be complex and are updated over time.

In contrast to outcome indicators, process indicators encompass a broad range of risk-reduction measures and accepted best practice, such as the use of appropriate antibiotics before surgery and ensuring health care workers’ vaccinations (eg, influenza, hepatitis B) are up to date. Compliance with hand-hygiene practices is a process indicator reported by the MyHospitals website. Process measures generally do not require risk adjustment3 as they are often not patient-specific, but should be applied across the board. However, consistent data collection methods are essential for fair comparison. Process indicators allow unfair comparisons of disparate health care facilities to be more easily avoided and unambiguous targets to be applied.3

The benefits of public reporting in health care include the ability to drive change in practice and reduce risk to admitted patients. Hospital administrators may opt to support and actively resource areas of need, with the aim of maintaining standards and achieving public confidence in their hospital. In the UK, a considerable reduction in S. aureus infections has been ascribed, in part, to public release of data,4 together with strong and frequent media exposure. The potential pitfalls of public reporting of HAI indicators include the provision of misleading data,5 unfair comparisons between dissimilar health care facilities,6 the application of unfounded target thresholds,7 and an undue focus by health care facilities on a “rate” rather than on prevention of HAIs,7 ultimately diverting a disproportionate amount of infection-control consultant time from prevention to surveillance activities. Expectations must also be realistic. To date, although quality improvement activity has been enhanced at the hospital level,8 neither improvement in health care performance nor any reduction in HAIs has been demonstrated.9 Further, public reporting has not been shown to influence consumers’ choice of hospital.

The measures that are currently reported have limitations. For example, a hospital may choose to perform regular hand-hygiene surveillance of high-risk wards with rotation of surveillance in other wards, or surveillance of high-risk wards with auditing of all other wards, or surveillance in the intensive care unit with auditing of all other wards.1 Ideally, the process should be uniform across all health care facilities to enable valid comparison. However, as the sizes of health care facilities in Australia vary widely, this is unlikely to be achieved, and stratification of centres would therefore be a sensible way to compare similar data. Another limitation is that non-standardised data are also used to calculate SAB rates. For instance, the denominator (patient-days) includes psychiatry admissions, which are generally associated with a very low risk of developing SAB. Hospitals have large differences in their number of psychiatry beds, so this can affect calculated SAB rates. Also, no adjustment is made for different rates of use of intravenous catheter or haemodialysis access devices between facilities, despite these devices being associated with an increased risk of health care-associated SAB.

A number of factors are necessary for a valid and beneficial strategy for public reporting of infection control indices.3 First, the choice of a reportable outcome or process must be based on burden of illness, preventability and feasibility of monitoring. Second, the target for surveillance and the audience must be defined, together with the intended objectives of reporting.3 Third, valid methods of data collection, analysis and reporting must be applied.10 This includes consistency with widely accepted case definitions and applying appropriate methods for risk adjustment.3 In the absence of methods for hospital-level risk adjustment, the US Centers for Disease Control and Prevention recommends reporting of HAI data according to specific hospital units (eg, intensive care unit, transplant unit, surgical wards) rather than reporting hospital-wide data.11 Hospitals have also previously been stratified according to casemix or size to enable meaningful comparison of SAB rates.12 To reduce the burden of manual data collection, electronic data sources may be considered to optimise case detection.3 Any targets that are set must be: (i) justifiable in terms of available evidence; (ii) reviewed in a timely fashion; and (iii) revised if any improvement in outcome is achieved. For example, SAB rates below the stipulated target (2 events per 10 000 days of patient care) are now reported by many centres,13 and it could be argued that a lower threshold should now be applied. Finally, feedback about the reporting strategy must be sought from stakeholders. Potential incentives or penalties for participating in a system of public reporting must be communicated, ideally with strategies in place for when targets are not met.

Given the limited time frame since the launch of the MyHospitals website, some of these criteria have not yet been addressed. Interestingly, it has been suggested that bloodstream infections associated with central venous catheters, surgical antibiotic prophylaxis, and influenza vaccination rates among health care workers should be the priority and the minimum data that are publicly reported,3,14 rather than the indices selected by the AIHW (SAB and hand hygiene). The scope of the current strategy does not fully represent the Australian health care system, as participation by private facilities is not mandatory. It is assumed that the target audience is the Australian general public. However, it is not clear how data are to be analysed over time or whether tests of statistical significance ought to be applied. In small hospitals, a small number of infections may lead to very high rates, interpretable only with an understanding of sample size effects and accompanying confidence intervals. Reporting on the MyHospitals website takes this into consideration by reporting the number of events, rather than a rate of infection, for hospitals with fewer than 5000 days of patient care per year. Quality-assurance measures to ensure submission of valid data from all surveyed health care facilities have not been formally defined. To improve the quality of captured data, implementation guidelines for surveillance of SAB have been released by the Australian Commission on Safety and Quality in Health Care, and concerted efforts have been made at a national level to train hand-hygiene auditors by standardised methods.13 However, jurisdictions are ultimately responsible for the quality of submitted data. Published threshold rates represent consensus or expert opinion, rather than evidence-based targets for improved quality of care. It is also unclear whether public opinion on the relevance of content, terminology and educational value has been or will be canvassed,15 or whether the availability of data is likely to, or is intended to, influence patient perceptions and decisions about the choice of facility before hospital admission.
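As an illustration of the sample-size point (an assumption-laden sketch, not part of the national reporting methodology), the example below computes an SAB rate per 10 000 days of patient care with an exact Poisson 95% confidence interval, showing how a handful of events in a small hospital yields a high but very uncertain rate.

```python
# Illustrative only: SAB rate per 10 000 days of patient care with an exact (Garwood)
# Poisson 95% confidence interval. Event counts and denominators are invented examples.
from scipy.stats import chi2

def rate_with_ci(events, patient_days, per=10_000, alpha=0.05):
    lower = chi2.ppf(alpha / 2, 2 * events) / 2 if events > 0 else 0.0
    upper = chi2.ppf(1 - alpha / 2, 2 * (events + 1)) / 2
    scale = per / patient_days
    return events * scale, lower * scale, upper * scale

# Small hospital: 2 events over 4000 patient-days -> about 5.0 per 10 000 (95% CI ~0.6 to ~18)
print(rate_with_ci(2, 4_000))
# Large hospital: 20 events over 150 000 patient-days -> about 1.3 per 10 000 (95% CI ~0.8 to ~2.1)
print(rate_with_ci(20, 150_000))
```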

While it would be optimal to investigate each of these factors, a formal review may not be practical. As a minimum requirement, measures to ensure the validity of data capture, analysis and reporting are paramount. This has also been identified as a priority after a review of reporting by the English National Health Service.16 If methods are not well founded, the value of data will diminish, stakeholders will not support findings, and the resources hospitals need to collate data may be regarded as unjustified. Hospital care and HAIs are complex, and stakeholders with expertise in public health, infectious diseases, infection control, informatics and epidemiology must all be engaged to ensure that valid data are released.

We welcome public reporting of infection control indicators in Australia, and applaud the efforts taken to date to accomplish the release of data and jurisdictional agreement. However, to improve quality, the strategy requires further development. The focus must now be on validating and enhancing reported indices according to local epidemiology, stakeholder opinion and the needs of the Australian public. Other reportable processes (such as rates of influenza vaccination among health care workers) and outcomes (such as rates of central-line-associated bloodstream infections) may be considered as quality measures, but not before the current reporting strategy for hand-hygiene compliance and SAB events is refined.