Regulation and the prevention agenda

Laws should scaffold food and alcohol industry self-regulation to improve unhealthy environments and prevent disease

Australians face crippling rates of chronic disease. The main culprits are behavioural risk factors that reflect unhealthy lifestyles: record levels of obesity, misuse of alcohol, tobacco use and too much salt, fat and sugar in our diets. Healthier and longer lives are possible, but new ways of influencing consumer choices are required. While educational strategies are important for individuals, they rarely succeed as population-wide interventions. Often, the problem is not lack of knowledge but difficulty in translating knowledge into action in unhealthy environments. Regulation and legislation play an important role in shaping healthier choices, but every gain for public health runs up against reflexive beliefs about individual responsibility and the “nanny state”.

Unhealthy lifestyles earn vast revenues for Big Tobacco, Big Alcohol and Big Food. The latter two industries have cleverly populated the policy space with weak, self-regulatory initiatives that are surprisingly complex yet designed for easy compliance. By failing to cover the placement of alcohol advertisements, for example, the Alcohol Beverages Advertising (and Packaging) Code permitted a website that targets children to advertise alcoholic drinks.1 The food industry’s initiatives on responsible advertising contain so many escape clauses that they surely represent a new kind of Swiss cheese.2 Australian parents have learned that they only deserve to have healthy children if they can withstand the pervasive influence of junk-food advertising. But unhealthy industries have not had it all their own way. Big Tobacco was decisively beaten over plain tobacco packaging; and the federal Minister for Health recently announced a front-of-pack food-labelling system that facilitates healthier choices by evaluating the nutritional quality of food.3

Where to from here, in the grand, intergenerational challenge of disease prevention? The report of the National Preventative Health Taskforce offers a goldmine of policy options, many of which are yet to be acted on.4 Prescriptive “command and control”-style laws remain necessary in some areas.5 Government leadership is needed in restricting the sale of tobacco over the internet, abolishing all duty-free allowances for tobacco and requiring health warnings and energy labelling on alcoholic beverages. The alcohol control agenda could be pushed along further by setting a minimum price per standard drink to discourage discounting.

In other areas, we need new solutions in the battle between business-as-usual and statutory controls. Governments can make progress through more subtle and incremental changes, including “legislative scaffolds” to strengthen the performance and credibility of industry-led initiatives. These include:

  • Setting official goals, targets and indicators for progress. Governments need to commit to bold goals and to clearly communicate the contributions that business is expected to make. For example, food industry self-regulation needs to significantly reduce children’s exposure to unhealthy food advertising. This is not happening at present.6

  • Closing off significant gaps in self-regulatory schemes. The principal vehicle for improving diet through food reformulation, the Food and Health Dialogue, currently covers only eight food categories compared with the 80 found in the United Kingdom equivalent.7,8 Regulation needs to restrict the placement of alcohol advertising in media with large youth audiences and to reduce alcohol industry sports sponsorship.

  • Improving the accountability of self-regulatory regimes. Industry codes should be administered independently and closely monitored by government, with public disclosure of industry performance. The credibility of food advertising codes is seriously damaged by letting a trade association administer the schemes, and by letting food companies devise their own nutrition standards as the basis for deciding what products can be advertised to children.

In many areas, including food and alcohol, self-regulatory codes are de-facto regulations intended to forestall more intrusive government action. Where independent monitoring indicates that voluntary initiatives are failing, industry must be left in no doubt that the government will escalate its involvement by introducing regulatory interventions that truly serve public health objectives.

Health outcomes of a subsidised fruit and vegetable program for Aboriginal children in northern New South Wales

In high-income countries, lower socioeconomic status is associated with both higher prevalence of non-communicable diseases and less-healthy dietary intake.1 In this context, promoting healthier nutrition, particularly increasing the intake of fruits and vegetables, has become an important public health priority.2 For those on low incomes, it has been argued that the cost of healthier foods is an important barrier to improving nutrition.3 Though not widely implemented in Australia, food subsidy programs are one strategy with the potential to improve socioeconomic inequalities in dietary intake.

In 2005, a rural Aboriginal community-controlled health service initiated a program for providing subsidised fruits and vegetables to improve nutrition among disadvantaged Aboriginal families. This program aimed to engage families in preventive health care in partnership with the health service while also addressing the barrier of the cost of healthier food choices.

Our previously published evaluation of this program demonstrated improvements in biomarkers of fruit and vegetable intake among children.4 We were also interested in whether there were short-term health benefits of this program, which may have been indicative of enhanced functioning of the immune system due to improved nutritional status.5

Here, we report whether participation in this fruit and vegetable subsidy program in northern New South Wales was associated with short-term improvements in the health of children in participating families, using a number of markers: changes in episodes of illness, episodes of common clinical conditions, prescription of antibiotics, and the prevalence of anaemia and iron deficiency.

Methods

The fruit and vegetable subsidy program

In 2005, the Bulgarr Ngaru Medical Aboriginal Corporation established a fruit and vegetable subsidy program for low-income Aboriginal families in the Clarence Valley, NSW. The program combined annual health assessments, including dental and hearing check-ups, with a weekly box of subsidised fruits and vegetables. Participating families collected boxes of seasonal fruits and vegetables (worth $40 for families with 1–4 children, or $60 for families with ≥ 5 children) at local greengrocers, making a copayment of $5. Complementary seasonal recipes and practical cooking and nutrition education sessions facilitated by dietitians were provided. The program is ongoing in the Clarence Valley; however, our evaluation involved new families receiving weekly boxes of fruits and vegetables over 12 months, with children having health assessments at baseline and after 12 months. Recruitment and baseline assessments were undertaken between December 2008 and September 2009, with follow-up assessments completed between December 2009 and September 2010.

Additional funding enabled the Galambila Aboriginal Health Service in Coffs Harbour and the Giingan Darrunday Marlaanggu Aboriginal Health Clinic at Bowraville in the Nambucca Valley to institute similar fruit and vegetable subsidy programs. These health services also participated in this evaluation study. The availability of and arrangements with greengrocers varied between the communities. In Coffs Harbour, families received vouchers from the health service, which they redeemed at the greengrocer by selecting their own fruits and vegetables. In the Nambucca Valley, the greengrocer was in a different town to the health service, so the health service staff collected and delivered the boxes of fruits and vegetables to families at their homes and collected the $5 contribution from them.

Participants

The participants were low-income (ie, unemployed or receiving pensions) Aboriginal families with one or more children ≤ 17 years of age who were regular patients at the respective health services. Many of the children had an identified nutrition risk (eg, underweight or overweight, chronic or recurrent infections) or presented frequently with episodes of illness to the health service. Parents or carers provided written informed consent and agreed to their children having annual health assessments, including research evaluation assessments. Potential participants were identified by staff using the criteria described above and were invited to join the program. At Bulgarr Ngaru, there was a waiting list of eligible families who wanted to participate, but numbers were limited by available funding.

Data collection and analysis

Retrospective health records audits were used to compare the 12 months before participation in the program with the initial 12 months during participation. These audits were only completed if records for the entire 24 months were available. Health records were reviewed from Aboriginal health services, local hospitals and any other nominated general practice. The number of visits to any health service for illness or preventive health activities, the number of episodes of common clinical conditions, the number of visits to hospital emergency departments and the number of antibiotic prescriptions were compared during each 12-month period.

In addition, each participant had a health assessment, based on the Medicare Benefits Schedule Indigenous Child Health Check, before participation and 12 months after joining the program. For all participants, height and weight were measured at each health assessment, and non-fasting venous blood samples were obtained to assess haemoglobin and iron status. Height was measured without shoes or thick socks using a Seca 214 portable stadiometer or S&M Instrument Co wall-mounted stadiometer, with the participant standing with the heels together and the heels, buttocks and upper back touching the upright of the stadiometer. Children under 3 years who were unable to stand unaided were measured supine using a Seca 210 baby measuring mat on a firm surface. Weight and body fat were measured using a Tanita UM030 Body Fat Monitor, with participants wearing light clothing only, with empty pockets and with shoes and socks removed. Body fat was measured only for children in the Clarence Valley aged ≥ 7 years, as per the Tanita recommendations. Children < 2 years who were unable to stand unaided were weighed on a Soehnle Professional Babyscale 7725. Body mass index (BMI) in kg/m2 was calculated for children aged 2–17 years. Blood samples collected from participants in the Clarence Valley were analysed at the Grafton Base Hospital pathology laboratory: haemoglobin on a Roche Diagnostics Sysmex XT-2000i haematology analyser, and serum iron and serum ferritin on a Roche Diagnostics Cobas Integra 800 chemistry analyser. Blood samples collected in Coffs Harbour and the Nambucca Valley were analysed at Symbion Laverty Pathology, Coffs Harbour: full blood counts on a Sysmex XT-2000i haematology analyser, serum ferritin on the Siemens ADVIA Centaur XP automated immunoassay system, and serum iron on the Siemens ADVIA 2400 chemistry system.

Statistical analysis

The mean and 95% confidence interval of changes in the number of health service visits, common clinical conditions and antibiotic use, anthropometric measurements and levels of haemoglobin, iron and ferritin were evaluated in IBM SPSS Statistics, version 19 using a paired sample t test and a general linear model to adjust for sex, age and community. The mean changes in these outcomes were assessed overall and by community, owing to differences in program implementation in each community. The analysis was based on complete data with no imputation for missing values. Based on an international classification of BMI centiles for age,6 the proportions of children who were underweight, normal weight, overweight and obese before participation were compared with the proportions after participation using the Stuart–Maxwell test of marginal homogeneity. The proportions of children with low haemoglobin, ferritin and iron before and after participation were compared using the McNemar test.
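As an illustration of the paired analyses just described, the sketch below shows how the unadjusted tests could be run in Python (the original analysis used IBM SPSS Statistics). The data and column names are hypothetical, and the covariate-adjusted general linear model is indicated only as a comment.

```python
# Minimal sketch (not the authors' code) of the paired analyses described
# above, run on made-up data with hypothetical column names.
import pandas as pd
from scipy import stats
from statsmodels.stats.contingency_tables import SquareTable, mcnemar

# Illustrative per-child data: oral antibiotic scripts in the year before and
# the year during participation, weight category (0 = underweight ... 3 = obese)
# and anaemia status (0/1) at baseline and follow-up.
df = pd.DataFrame({
    "abx_before": [3, 1, 4, 2, 0, 5], "abx_after": [1, 1, 2, 2, 0, 3],
    "wt_before": [1, 1, 2, 3, 1, 0], "wt_after": [1, 2, 2, 2, 1, 1],
    "anaemia_before": [1, 0, 0, 1, 0, 0], "anaemia_after": [0, 0, 0, 1, 0, 0],
})

# Paired sample t test on the before/after counts (the unadjusted analysis).
t, p = stats.ttest_rel(df["abx_after"], df["abx_before"])
print(f"mean change {(df['abx_after'] - df['abx_before']).mean():.2f}, P = {p:.3f}")

# Stuart-Maxwell test of marginal homogeneity for the four weight categories.
wt = pd.crosstab(df["wt_before"], df["wt_after"]).reindex(
    index=range(4), columns=range(4), fill_value=0)
print(SquareTable(wt.values).homogeneity())

# McNemar test for the paired binary anaemia outcome.
an = pd.crosstab(df["anaemia_before"], df["anaemia_after"]).reindex(
    index=[0, 1], columns=[0, 1], fill_value=0)
print(mcnemar(an.values, exact=True))

# Adjustment for sex, age and community would fit a linear model to the change
# scores, e.g. statsmodels' formula API: smf.ols("delta ~ sex + age + C(community)").
```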

Ethics

Ethics approval was obtained from the University of Melbourne Human Research Ethics Committee, University of South Australia Human Research Ethics Committee, the Aboriginal Health and Medical Research Council of NSW and the North Coast Area Health Service human research ethics committee. Community consent was obtained from the boards of the three participating health services. Each child’s pathology results were discussed with parents or carers, and overall summary results were discussed in community focus groups in the Clarence Valley.

Results

The demographic characteristics of 174 children who participated in the fruit and vegetable program are presented in Box 1. Of these, 167 children had an initial health assessment including anthropometry completed at baseline.

Retrospective clinical audits were completed for 167 children whose families received at least one box of subsidised fruits and vegetables. Seven children did not have clinical audits: three whose families moved from the area, and four whose families were withdrawn from the program for non-compliance with initial assessments.

After 12 months, 143 children had follow-up health assessments. Of those who did not complete follow-up assessments, nine were from families who moved from the area, nine failed to attend appointments and 13 were from families who dropped out of the program. The median period between baseline and follow-up health assessments was 370 days (interquartile range, 354–407 days). In the Clarence and Nambucca Valleys combined, 30 of 43 families collected 75% or more of the fruit and vegetable boxes available to them over the 12 months. These data were not available for Coffs Harbour.

Anthropometric changes

At the initial assessment of 134 children aged 2–17 years, 4.5% (6) were underweight, 67.2% (90) were normal weight, 14.9% (20) were overweight and 13.4% (18) were obese. Of 125 children aged 2–17 years who were reassessed after 12 months, 4.0% (5) were underweight, 66.4% (83) were normal weight, 16.8% (21) were overweight and 12.8% (16) were obese. There were no significant differences in the proportion of children in each weight category after the fruit and vegetable program compared with baseline (χ2[3,125] = 1.33; P = 0.721). There was also no significant change in the mean percentage body fat after 12 months on the program compared with baseline (22.5% versus 22.1%) among the subgroup of 22 children aged ≥ 7 years who had this assessed.

Health outcomes

The unadjusted data from clinical audits for the overall sample showed that during program participation the mean annual numbers of visits to any health service for illness, hospital emergency department attendances and oral antibiotic prescriptions were significantly lower (P = 0.037, P = 0.017, P = 0.001, respectively) (Box 2). There was also a non-significant reduction in episodes of pyoderma during program participation (P = 0.093). After adjustment for sex, age and community, only the reductions in illness-related health service or hospital visits and in prescribing of oral antibiotics remained statistically significant (Box 3). An additional adjustment of change scores for the baseline values in the covariate-adjusted models yielded no differences in the conclusions drawn other than a loss of statistical significance for the observed reduction in illness-related visits (−0.5; 95% CI, −1.0 to 0.03).

Changes in haemoglobin and iron status

A small, non-significant increase of 1.5 g/L (P = 0.076) in the mean haemoglobin level was shown; this effect increased in magnitude to 3.1 g/L and was statistically significant after adjustment for community, sex and age (Box 4). An additional analysis adjusting for baseline haemoglobin level did not change this conclusion. Comparing the individual communities, a large, statistically significant increase in mean haemoglobin level was shown at Bowraville (7.8 g/L) but not in Coffs Harbour or the Clarence Valley (P < 0.001 for difference between communities). The proportion of participants with anaemia decreased by 3% compared with baseline (Box 4). Iron deficiency, based on serum ferritin, was common at baseline (41%). There were small decreases in the proportion of fruit and vegetable program participants with low ferritin and iron levels; however, there were no significant differences in mean serum ferritin and serum iron levels after the fruit and vegetable program compared with baseline with or without adjustment for community, sex and age (Box 4). Additional adjustment for baseline iron and ferritin levels did not change these findings.

Discussion

Aboriginal children from the NSW north coast who participated in this fruit and vegetable subsidy program had significantly fewer oral antibiotic prescriptions over 12 months compared with the preceding year. The proportion of overweight or obese children after participation in this program did not change. Although height, weight and BMI had all increased significantly at the 12-month follow-up as expected in children, there was no change in the percentage body fat among a subgroup who had this assessed. The prevalence of iron deficiency at baseline was 41%, with anaemia in 8%. There was a small but statistically significant increase in the mean haemoglobin level and a reduction in the proportion of children with anaemia, but only a non-significant 4% decrease in iron deficiency.

Our study demonstrates the potential to undertake evaluation studies in an Aboriginal community-controlled health service, despite the inherent limitations of a busy community-oriented service organisation. It is also an example of an Aboriginal community-directed program; such programs are far more common than intervention research, although few are documented in the academic literature.

The nutritional challenges in this group of disadvantaged Aboriginal children are consistent with those reported in a study of other towns in northern NSW.7 Low intakes of fruits and vegetables and high intakes of energy-dense, nutrient-poor foods were reported among both Aboriginal and Torres Strait Islander and non-Indigenous children aged 9–13 years, with a particularly high intake of sodium, calories, fat, sugary drinks and white bread by Indigenous boys.7 Although the nature of the intervention in our study differed from other nutrition interventions in remote Aboriginal communities, such as the Looma Healthy Lifestyle Program8 in Western Australia and the Minjilang Health and Nutrition Project9 in the Northern Territory, a common feature of these successful programs was strong community engagement. This, together with ongoing relationships, underpins other current Aboriginal community research programs.10,11

Community support for our healthy food program was fostered by the 88% subsidy for fruits and vegetables. Lower subsidies of 10%–20% have been used in other recent healthy food research and modelling studies.12–14 The higher subsidy used in this program reflects the substantial challenges and barriers to healthy nutrition faced by disadvantaged Aboriginal and Torres Strait Islander families. However, it is consistent with the WIC program (Special Supplemental Food Program for Women, Infants, and Children) in the United States and the Healthy Start program in the United Kingdom, which provide free healthy foods to low-income pregnant women and young children. The WIC program, in particular, has been shown to improve the nutritional status of participating women and children and pregnancy outcomes.15–19 There are still questions about the cost-effectiveness of these healthy food subsidy programs and whether the impacts on nutritional status are sustained.15,20,21 Food subsidies remain topical in Australia, given increasing concerns about food insecurity22 and as a policy alternative to compulsory income management and cash entitlements for low-income families.

The before-and-after uncontrolled study design limits the strength of our data. Regression to the mean in the paired data and the normal reduction in rates of childhood illness as children grow older may also have contributed to the findings.23 Regression to the mean was accounted for by using covariate-adjusted models that included age, sex and community, in addition to the baseline value for each outcome analysed. It is also possible that other unrelated environmental factors, such as local early childhood and school nutrition programs, contributed to the improvements in nutrition and health outcomes.24,25 In addition, the health record audits may be subject to incomplete ascertainment, because patients could potentially access more than one primary health care service and hospital records are not linked across area health services. It is not possible to predict the impact of this on the findings; however, it is likely to have been similar before and after participation.

We showed an association between subsidised fruits and vegetables and short-term health improvements in this study. We have previously reported increased plasma biomarkers of fruit and vegetable intake among participants,4 which supports the hypothesis that improvements in dietary intake contributed to improved health outcomes. A controlled study is needed for further confirmation of these findings and to allow investigation of the cost-effectiveness of such a program. Our findings are consistent with prospective studies demonstrating an association between healthy nutrition and improved long-term health outcomes.26,27

A larger trial is warranted to investigate the sustainability and feasibility of healthy food subsidy programs in Australia. The program could be adapted to target low-income families more generally. The design of future healthy food subsidy studies needs to allow the relative contributions of fruit and vegetables and of comprehensive primary health care to the improved outcomes to be distinguished. This program aimed to engage families in preventive health activities more fully than previously, which may also have contributed to the observed health outcomes. This is relevant, given the cost of food subsidies and the need to target effective interventions. Food subsidy programs in the US operate independently of health services, although the WIC program assists participants to access health and social services.28

This fruit and vegetable subsidy program was associated with improvements in some indicators of short-term health status among disadvantaged Aboriginal children. These health outcomes and the associated improvements in biomarkers of fruit and vegetable intake4 have the potential to reduce health disparities in the population.

1 Baseline demographic characteristics of participating children, in total and by community

| Characteristic | All communities | Clarence | Coffs Harbour | Nambucca |
|---|---|---|---|---|
| No. of families | 55 | 30 | 12 | 13 |
| No. of children | 174 | 90 | 36 | 48 |
| No. of boys | 82 | 46 | 18 | 18 |
| Age in years, mean (SD) | 7.6 (4.2) | 7.5 (3.8) | 11.0 (3.3) | 5.8 (4.3) |
| Children with at least one smoker in household* | 107/164 | 62/90 | 18/36 | 27/38 |
| Families receiving unemployment benefits or pensions, no./total | 51/55 | 28/30 | 10/12 | 13/13 |

* Proportion of participants with a valid response to the number of smokers in the household.

2 Retrospective clinical audit data for health outcomes among participants for the 12 months before and 12 months after starting the subsidised fruit and vegetable program (n = 167)*

[Figure: bar chart of mean annual event counts before and during program participation]

* Error bars show 95% CI. † Illness-related visits to health services. ‡ Preventive health-related visits to health services. § Number of prescriptions.

3 Change in health outcomes among Aboriginal children participating in the subsidised fruit and vegetable program (n = 167)

| | Sick visits* | Well visits† | Otitis media episodes | Pyoderma episodes | Hospital attendances | Oral antibiotics‡ | Topical antibiotics‡ |
|---|---|---|---|---|---|---|---|
| Unadjusted mean Δ-score§ (95% CI) | −0.6 (−1.1 to −0.04)** | −0.1 (−0.3 to 0.03) | −0.1 (−0.2 to 0.06) | −0.2 (−0.4 to 0.03) | −0.3 (−0.5 to 0.05) | −0.5 (−0.8 to −0.2)** | −0.06 (−0.2 to 0.1) |
| Adjusted mean Δ-score¶ (95% CI) | −0.6 (−1.2 to −0.001)** | −0.2 (−0.3 to 0.01) | −0.1 (−0.2 to 0.06) | −0.2 (−0.4 to 0.05) | −0.2 (−0.4 to 0.1) | −0.5 (−0.8 to −0.2)** | −0.1 (−0.2 to 0.1) |

* Illness-related visits to health services. † Preventive health-related visits to health services. ‡ Number of prescriptions. § Δ-score = (number of episodes per year during 12 months’ participation) − (number of episodes in the year before program participation). ¶ Adjusted for sex, age and community. ** Significantly different to zero (P < 0.05).

4 Changes in haemoglobin and iron status among fruit and vegetable program participants (n = 129)†

| | Mean level (SD), before | Mean level (SD), after | Unadjusted mean Δ-score (95% CI) | Adjusted mean Δ-score* (95% CI) | Low, before (no. [%]) | Low, after (no. [%]) | P |
|---|---|---|---|---|---|---|---|
| Haemoglobin (g/L)‡ | 126.8 (12.3) | 128.2 (10.5) | 1.5 (−0.2 to 3.1) | 3.1 (1.4 to 4.8)** | 12/150 (8%) | 7/137 (5%) | 0.453 |
| Ferritin (μg/L)§ | 33.3 (24.2) | 35.2 (22.5) | 3.2 (−0.5 to 6.2) | 1.7 (−2.5 to 6.0) | 63/152 (41%) | 51/139 (37%) | 0.440 |
| Iron (μmol/L)¶ | 12.7 (6.0) | 13.2 (5.3) | 0.5 (−0.6 to 1.6) | 0.8 (−0.5 to 2.0) | 43/152 (28%) | 32/139 (23%) | 0.405 |

* Adjusted for sex, age and community. † 129 participants had valid haemoglobin, ferritin and iron results at baseline and follow-up; additional participants had valid pathology results at either baseline or follow-up, as shown. ‡ Reference interval (RI): ≥ 5 years, 115–140 g/L; < 5 years, 105–140 g/L. § RI: boys, 20–200 μg/L; girls, 29–200 μg/L. ¶ RI, 11–28 μmol/L. ** Significantly different to zero (P < 0.05).

Vitamin B12 and folate tests: interpret with care

Clinicians need to consider analytical issues when requesting and interpreting these tests

Vitamin B12 and folate tests are useful for identifying patients with a deficiency. In this issue of the Journal, Willis and colleagues highlight some of the limitations of serum vitamin B12 assays.1 They also emphasise the uncertainty regarding whether red-cell or serum folate should be the preferred first-line test for folate status. The issues underlying some of the data presented require elaboration.

Vitamin B12 assays have an interpretative grey zone in the region of low-normal and mildly low results. Outside the grey zone, the tests show good performance characteristics: a cut-off of 221 pmol/L has a sensitivity of 99%2 and a cut-off of 123 pmol/L has a specificity of 95%.3 Optimal decision points may vary between methods, but results above 220 pmol/L generally rule out deficiency, while results below 125 pmol/L “rule in” deficiency. Between these limits, misclassification may occur if results are interpreted in a binary manner as simply above or below the lower limit of normal (typically about 150 pmol/L). One study used a binary interpretative approach in a cohort of patients with low-normal or low vitamin B12 concentrations (< 221 pmol/L).4 It is the results of this study that have led to the claims of the extraordinary misclassification rate of 50%, quoted by Willis et al. In contrast, the appropriate response to vitamin B12 results of 125–220 pmol/L in patients clinically suspected of deficiency is further testing — using, for example, metabolic markers.
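As a concrete illustration of this three-zone approach (a sketch using the decision points quoted above, which are assay dependent and therefore not universal):

```python
# Sketch of the three-zone interpretation of serum vitamin B12 results.
# Decision points are those quoted in the text; they vary between methods.
def interpret_b12(result_pmol_l: float) -> str:
    """Classify a serum vitamin B12 result instead of applying a binary cut-off."""
    if result_pmol_l > 220:
        return "deficiency unlikely"   # ~99% sensitivity at a 221 pmol/L cut-off
    if result_pmol_l < 125:
        return "deficiency likely"     # ~95% specificity at a 123 pmol/L cut-off
    # Grey zone: binary reporting against the lower reference limit (~150 pmol/L)
    # misclassifies here; confirm with metabolic markers if deficiency is suspected.
    return "indeterminate: consider metabolic markers"

print(interpret_b12(150))  # grey zone, not simply "normal" or "low"
```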

Clinicians also need to be alert to interference in vitamin B12 assays from intrinsic factor antibodies. The interference is sporadic and the same sample may give false results in one assay but not another. An investigation of 23 samples from patients with clinically overt pernicious anaemia (15 positive and eight negative for intrinsic factor antibodies) showed false-normal results for five to eight of the samples, depending on the assay used.5 It is from this report of highly selected samples that claims of “assay failure rates of 22% to 35%” are made.1 This overstates the frequency of the error among non-selected requests to laboratories, which has been estimated as closer to 1 in 3000 requests.6 The interference appears to be limited to samples from overtly deficient patients and therefore needs to be considered when vitamin B12 results are normal or high in patients with clinically evident deficiency.

In most clinical contexts, assessment of folate status is valid with either red-cell or serum folate. Red-cell folate has the theoretical advantage of providing a longer-term assessment of folate status. This is offset in practice by additional variation in red-cell folate measurements due to sample pretreatment factors and the binding of folate to deoxyhaemoglobin — factors that do not influence serum folate results. A systematic review recently assessed the performance of red-cell versus serum folate for identifying deficiency.7 It found that neither was clearly superior, although serum folate more frequently showed higher correlation with homocysteine as a functional marker of deficiency.

Tests for folate status remain relevant among populations in countries where staple foods are fortified with folate. The introduction of mandatory fortification of wheat flour used for breadmaking has reduced the prevalence of deficiency in Australia. In most patients with macrocytic anaemia it may therefore be appropriate to use folate as a second-line investigation after more common causes have been excluded. However, deficiency may still be seen, particularly in those not regularly consuming bread or other grain-based products, such as those with coeliac disease or alcohol dependence.

Vitamin B12 and folate assays are widely available and inexpensive investigations. They identify vitamin deficiencies that have serious consequences if untreated. Recognition of the grey zone in vitamin B12 interpretation limits misclassification; however, clinicians must also remain alert to sporadic assay interference in overtly deficient patients. Folate status may be assessed with either red-cell or serum folate. As the faster and less expensive test to perform, serum folate appears to offer the best combination of test cost and clinical information.

Salicylate elimination diets in children: is food restriction supported by the evidence?

When a food is identified as causing allergic symptoms, that food will usually be removed from the diet. However, inappropriate use of extensive food elimination can be harmful. Salicylate elimination or “low salicylate” diets — which remove foods deemed to contain natural salicylates — can be particularly restrictive, especially as they are often implemented with restriction of other foods such as those containing amines, glutamates, synthetic food additives, gluten and dairy. These diets appear to be commonly used in New South Wales, but to our knowledge are not widely used outside of the state or in other countries. We discuss our own experiences with children who were referred for care to the allergy clinics of three public hospitals, and who had previously used these diets, and review the evidence for using low salicylate diets in treating a variety of disease indications.

For which conditions are low salicylate diets prescribed in Sydney?

We sought to identify the indications for which salicylate elimination is prescribed in Sydney by conducting a retrospective case note review of children attending the allergy clinics of the two main children’s hospitals, Sydney Children’s Hospital and the Children’s Hospital at Westmead, as well as a major regional allergy clinic at Campbelltown Hospital, between 1 January 2003 and 31 December 2011. We confirmed any missing details through a single telephone conversation between an immunologist or allergist and the child’s carer. Approval for the study was obtained from the South Eastern Sydney Local Health District, Human Research Ethics Committee – Northern Sector.

We identified 74 children who had at some point in their lives been on a low natural salicylate diet. The most common indication for initiation of the diet, reported by the patient’s carer, was eczema in 34/74, followed by a behavioural abnormality (eg, attention deficit hyperactivity disorder [ADHD] or unsettled infant behaviour) in 17/74 and gastrointestinal disturbances (eg, abdominal pain or gastro-oesophageal reflux disease) in 12/74 (Box).

What is the evidence supporting the role of low salicylate diets for these indications?

We reviewed the literature using MEDLINE and PubMed, combining search terms “salicylate”, “elimination diet” or “exclusion diet” with “food allergy”, “food intolerance”, “eczema”, “atopic dermatitis”, “chronic urticaria”, “ADHD”, “behaviour” or “gastrointestinal”. We found no evidence in the peer-reviewed literature to suggest a role for salicylates in any of the diseases for which the diet is prescribed.

In the absence of an overt type I hypersensitivity clinical response, food is an uncommon precipitant of eczema. A 2008 Cochrane review concluded that, with the exception of egg exclusion in patients who have positive specific IgE antibodies to egg, there is little evidence to support restriction of tolerated foods in eczema.1

On the other hand, there is good evidence that food exclusion can ameliorate the hyperkinesis symptoms of ADHD, with numerous studies showing a benefit for broad-based food exclusion diets.2 However, a recent randomised controlled trial suggests that much of this effect is caused by artificial food additives, and we were unable to identify any peer-reviewed evidence that natural salicylates can cause hyperactive behaviour.3 One published letter referred to challenge with salicylates precipitating behavioural symptoms; however, the authors did not stipulate whether the challenge substance was natural salicylate or acetylsalicylic acid (aspirin)4 — aspirin being known to cause significant symptoms when natural salicylates have no effect.5

Finally, while foods are well known to cause a variety of gastrointestinal symptoms, from coeliac disease to irritable bowel syndrome, there is no good peer-reviewed evidence that natural salicylates cause any gastrointestinal symptoms.

Do salicylate elimination diets cause harm?

Although food elimination diets used to treat allergy have been associated with side effects including micronutrient deficiency,6–8 protein or energy malnutrition,9 eating disorders,10 food aversion,11 and the development of allergic reactions including fatal anaphylaxis to the excluded food on reintroduction,12,13 we were unable to identify any evidence regarding the safety or otherwise of salicylate elimination diets in children. This is of concern given that many of the patients attending our clinics had started the diets at a young age (median, 24 months; range, 6 weeks to 15 years), and continued for an extended period (> 1 year in 30/61 children).

Among our patients, where details were available, we identified a high occurrence of possible adverse outcomes among children who had been on low salicylate diets, with 31 out of 66 children suffering one or more possible adverse events. Symptoms and problems experienced included weight loss or failure to thrive in 13/66 children, eating disorders (including three cases of anorexia nervosa) in 4/66, specific nutrient deficiency in 2/66 (one case of vitamin C deficiency, one case of protein, iron and zinc deficiency), food aversion in 6/66, alopecia in 2/66 and unplanned weaning in 3/66. Four out of 13 mothers who went on the diet to benefit their breastfeeding infant suffered significant weight loss, which they perceived as problematic.

While we acknowledge that our cohort has an inherent selection bias and that without a control group it is not possible to attribute the reported events to the diet, we are concerned that all adverse events were reported to have occurred after initiation of the diet.

Also, beyond the possible adverse events noted in our patients, we are additionally concerned about the use of broad-based empirical food elimination in early life, with increasing evidence suggesting that food elimination at this time predisposes to the development of food allergy to the excluded foods, particularly among children with eczema, which was the largest group identified here.14,15

Who prescribes salicylate elimination diets?

Among those patients for whom details were available, 47/69 were prescribed the diet through medical allergy services, with general paediatricians (7/69) and dietitians (7/69) prescribing less frequently, while 8/69 parents obtained the diet from friends or from the internet. We do not prescribe the diets in our practice.

In order to assess whether the diet was more widely used elsewhere, we surveyed overseas allergists. An online survey of members of the editorial boards of major European and North American allergy journals produced 23/125 responses, with none of the responding experts employing the diet for ADHD, and only 1/23 using a form of salicylate exclusion for eczema.

Does the available research support a role for natural salicylates in any disease causation?

As discussed above, there is no peer-reviewed evidence to support the use of low salicylate diets in treating eczema, behavioural symptoms or gastrointestinal symptoms.

One disease where the role of natural salicylates has been studied in more detail is aspirin-sensitive asthma, where doses of natural salicylic acid 10 times higher than the aspirin dose have no effect.5 The lack of importance of natural salicylates in this disease is well established in clinical practice, as reflected by the evidence-based clinical decision support website, UpToDate (http://www.uptodate.com/home), which states that “dietary salicylates do not cause symptoms in NSAID [non-steroidal anti-inflammatory drug] sensitive patients”.16

A second disease where low salicylate diets have been trialled is chronic idiopathic urticaria (CIU); however, while there is some evidence that synthetic food additives may play a role in a small proportion of adult CIU cases,17 the peer-reviewed evidence that salicylates play any role in this disease is largely limited to studies that used aspirin as the challenge substance.18 On the other hand, there are several reasons to question the idea that salicylate-containing foods play any role in CIU. First is the recent discovery that half of childhood CIU is autoimmune in nature, resulting from autoantibodies against the high-affinity IgE receptor.19 Second, evidence suggests that those few foods said to contain salicylates that may precipitate CIU (eg, tomatoes, wine, herbs) probably do so not because of their salicylate content, but because they contain volatile aromatic chemicals (eg, alcohol, ketones and aldehydes).20 Third, there is evidence that the foods removed in low salicylate diets may not actually contain significant levels of salicylates, with one group suggesting that many “high salicylate foods” contain no aspirin and only tiny amounts of natural salicylates.21

Finally, it is important to discuss local research on salicylate intolerance performed in the early-to-mid 1980s. Most of that work focused on CIU, with a lesser focus on a number of other symptom complexes.4,22–24 The research involved placing patients on diets that removed foods containing salicylates, using food challenge to identify which constituents were responsible for any perceived improvement.22,24 However, teasing out which component of these broad-based elimination diets was responsible for any perceived benefit is difficult, given that the diets removed many food constituents, including those now known to cause symptoms, such as artificial food additives,3,17,25 and because the challenge substance was commonly aspirin,22,24 although sodium salicylate was said to have been used in some work.22 Moreover, most of the clinical data appeared in a non-peer-reviewed format,22 or with incomplete methodological details in review format in peer-reviewed journals.4,23 These non-peer-reviewed findings of disease associations of natural salicylates have not been reproduced by other investigators, and a recent British textbook of food hypersensitivity concluded “there are no effective diagnostic tests for salicylate intolerance, and no studies showing the efficacy of dietary exclusion”.26

Can salicylate elimination diets be recommended for use in children?

The use of low salicylate diets in children is not supported by current evidence or by expert opinion. There is also no evidence that these diets are safe, in particular for infants and their breastfeeding mothers, and for those at risk of developing eating disorders. While our retrospective case note review is insufficient to prove any risk associated with the diets, it is concerning that harm may occur when children and adolescents are placed on such restrictive diets, particularly if they stay on them for long periods.

We would invite any proponents and prescribers of the diet to produce evidence of the efficacy and safety for the disorders in which they consider such a restrictive diet is indicated. Pending such evidence, we cannot recommend the use of salicylate elimination diets.

Characteristics of 74 children prescribed salicylate elimination diets

| Characteristic | No. of children/total* |
|---|---|
| Age at initiation of diet | |
| Infancy (1 year or less) | 26/67 |
| Early childhood (1–3 years) | 22/67 |
| Childhood (4–10 years) | 10/67 |
| Adolescence (11–18 years) | 9/67 |
| Duration of diet | |
| < 1 month | 5/61 |
| 1 month to 6 months | 17/61 |
| > 6 months to 1 year | 9/61 |
| > 1 year | 30/61 |
| Indication for diet | |
| Eczema | 34/74 |
| Behaviour (including ADHD) | 17/74 |
| Gastrointestinal complaints | 12/74 |
| Failure to thrive | 4/74 |
| Acute allergic reaction | 3/74 |
| Anaphylactoid reaction | 2/74 |
| Urinary urgency | 1/74 |
| Headache | 1/74 |
| Adverse events | |
| Failure to thrive or weight loss | 13/66 |
| Food aversion | 6/66 |
| Eating disorder† | 4/66 |
| Infant weaned early | 3/66 |
| Alopecia | 2/66 |
| Nutrient deficiency | 2/66 |
| Constipation | 1/66 |
| Total children with adverse events | 31/66 |
| Breastfeeding mothers with complications of diet | 4/13 |
| Prescribed by | |
| Medical allergy clinics | 47/69 |
| Dietitian | 7/69 |
| Paediatrician | 7/69 |
| Friend or internet | 8/69 |

ADHD = attention deficit hyperactivity disorder. * Varying denominators reflect the completeness of available data. † Including three cases of anorexia nervosa.

Better prepared next time: considering nutrition in an emergency response

To the Editor: Cyclones, floods and bushfires are experienced in Australia every year, and Australia’s management of natural disasters centres on prevention, preparedness, response and recovery.1 Although access to safe food is a basic human need, during the 2010–2011 Queensland floods there was minimal information available to guide household food preparedness and food supply to communities.2 To ensure that Queensland is better prepared for future natural disasters, the Queensland Floods Commission of Inquiry recommended the development of consistent community education programs.2 Following the floods, a local food security resource kit3 was developed; however, there were no statewide resources. In 2011, we were members of a multidisciplinary working group — the Food Requirements in Disasters Working Group — that was established by Queensland Health to provide advice on food requirements in disasters for households and community organisations.

There is little international literature on food recommendations in disasters that is specific to high-income countries. Existing Australian resources did not consider nutritional requirements for infants, children and adults, did not provide sufficient advice for appropriate food purchasing in the event of no access to power or water and/or were no longer publicly available.4,5 Twenty-six principles and nutritional criteria (Box) — covering food safety, practical considerations and nutrient requirements — guided the development of recommendations on food requirements during disasters for infants, children and adults.

Five online fact sheets (available at http://www.health.qld.gov.au/disaster/html/prepare-event.asp) were developed, outlining the food and equipment required to sustain two people for 7 days (Emergency pantry list for Queensland households) and to support both breastfed and formula-fed infants for 3 days (including Food for infants in emergencies and Preparing ready-to-use infant formula in an emergency). The recommended types and quantities of foods align with the Australian dietary guidelines and Infant feeding guidelines (available at http://www.eatforhealth.gov.au). To facilitate purchasing choices, tips and examples of product sizes based on items available in major supermarkets are included.

Credible, easily accessible information is essential to ensure households have the capacity to prepare for and respond to disaster situations, to prevent panic buying and food shortages, and to minimise any negative impact on the health and wellbeing of individuals affected by disaster. Queenslanders now have access to a suite of resources to help them stay safe and healthy during natural disasters and severe weather conditions.

Principles and nutritional criteria used to guide recommendations on food requirements during disasters for infants, children and adults

Principles

  • Nutrient requirements need to be balanced against practicality

  • Provision of adequate energy (kilojoules) and water are key priorities

  • Dietary recommendations set at population level — no individual dietary requirements

  • Requirements per person — should be scalable

  • Food products should be non-perishable

  • No refrigeration required

  • Minimal preparation required

  • No reheating or cooking involved

  • Number of days — should be scalable and informed by practical experience

  • Include generic products rather than specific brands

  • Total weight should be kept to a minimum

  • Foods should be safe

  • Packaging should be robust

  • Packaging should be waterproof and non-porous

  • Packaging should be vermin proof

  • Presume there are no facilities available for food storage — provide appropriate containers and serving sizes

  • Provide other equipment needed for preparation and consumption of food, including hand sanitiser, plastic cutlery and plates

  • Wastage should be minimised

  • Costs should be reasonable (no luxury items)

  • Foods should be palatable and acceptable

  • Foods should be readily available, familiar and culturally appropriate

  • Foods should be adaptable to personal tastes

Nutritional criteria

  • Provide mean food and nutrient requirements for adults and children

  • Provide mean food and nutrient requirements for infants (≤ 12 months)

  • Provide 100% of requirements (presume that households and isolated people have no other food available)

  • Particularly note upper limit for sodium

Improved iodine status in Tasmanian schoolchildren after fortification of bread: a recipe for national success

Iodine is an essential micronutrient required for thyroid hormone synthesis. Inadequate dietary iodine intake is associated with a spectrum of diseases termed iodine deficiency disorders. The most serious and overt consequences are neurocognitive disorders and endemic goitre.1 Urinary iodine excretion is a marker of recent dietary iodine intake and is typically used to monitor population iodine sufficiency. Population iodine status is considered optimal when median urinary iodine concentration (UIC) is between 100 µg/L and 199 µg/L, with no more than 20% of samples having UIC under 50 µg/L.1
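To make the criterion concrete, the sketch below expresses this definition of optimal population iodine status as a simple check; the function and data are illustrative, not part of the survey methods.

```python
# Sketch of the optimal-status criteria quoted above, assuming `uic` holds
# spot urinary iodine concentrations in µg/L for a population sample.
import statistics

def iodine_status_optimal(uic: list) -> bool:
    """Median UIC 100-199 µg/L and no more than 20% of samples under 50 µg/L."""
    median = statistics.median(uic)
    low_fraction = sum(v < 50 for v in uic) / len(uic)
    return 100 <= median <= 199 and low_fraction <= 0.20

print(iodine_status_optimal([129, 95, 179, 60, 140, 110]))  # True
```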

Concern about the emergence of widespread mild iodine deficiency in Australia and New Zealand led to mandatory iodine fortification of yeast-leavened bread in 2009.2 Tasmania has a well-documented history of endemic iodine deficiency, with iodine supplementation strategies implemented since the 1950s.3 The use of iodophors as sanitising agents in the dairy industry was thought to have provided protection; however, urinary iodine surveys of Tasmanian schoolchildren in 1998 and 2000 showed a recurrence of iodine deficiency.4

In October 2001, the Tasmanian Government introduced a state-based voluntary iodine fortification program as an interim measure to reduce the recurrence of iodine deficiency. This program resulted in a modest but significant improvement in population iodine status.5 The Tasmanian voluntary fortification experience provided valuable information for the development of the Australia and New Zealand mandatory iodine fortification program.

In this article, we describe the results of the 2011 urinary iodine survey of Tasmanian schoolchildren and compare these results to surveys conducted before fortification and during a period of voluntary fortification.

Methods

A cross-sectional urinary iodine survey of Tasmanian schoolchildren was conducted in 2011. Survey methods were comparable to those used during the period of voluntary fortification, as described elsewhere.5

A one-stage cluster sampling method was used to randomly select school classes that included fourth-grade students from all government, Catholic and independent schools in Tasmania (such classes may include children in third, fourth, fifth and sixth grade, as composite class structures are popular in Tasmania). A total of 52 classes (from 49 schools) were invited to participate. This included 42 classes that had been randomly selected for the final survey conducted during the period of voluntary fortification and an additional 10 classes randomly selected in 2011 to boost sample size. In total, 37 classes (from 35 schools) agreed to take part, representing a class participation rate of 71%. Of the 880 children in participating classes, 356 (40%) returned positive consent and 320 (36%) provided a urine sample for analysis. These participation rates are comparable with the rates reported from previous surveys.5
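For readers unfamiliar with the design, the sketch below illustrates one-stage cluster sampling with a hypothetical sampling frame: whole classes are drawn at random and every child in a drawn class is invited. It simplifies the actual two-part selection described above (42 carried-over classes plus 10 newly drawn).

```python
# Minimal sketch of one-stage cluster sampling over a hypothetical frame of
# classes containing fourth-grade students; whole clusters (classes) are
# sampled, then every child in a selected class is invited to participate.
import random

frame = [f"school{s}_class{c}" for s in range(150) for c in (1, 2)]
random.seed(2011)  # illustrative seed for reproducibility
invited_classes = random.sample(frame, 52)
```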

Spot urine samples were collected at home, returned to school and transported by a private pathology provider to a laboratory where they were frozen and stored. Batch analyses were completed by the Institute of Clinical Pathology and Medical Research, Westmead Hospital. UIC was measured using the ammonium persulfate digestion method based on the Sandell–Kolthoff reaction.6

UIC data from children of comparable age in the prefortification surveys and in the surveys conducted during the voluntary fortification period were used for comparison with the data from the 2011 survey.

Data were analysed using Stata version 11 (StataCorp). Median UIC, interquartile range and the proportion of samples with UIC under 50 µg/L were calculated for each survey. To facilitate comparisons between medians and the proportion of UIC results under 50 µg/L across intervention periods (prefortification, voluntary fortification and mandatory fortification), data were combined from the two prefortification surveys (1998 and 2000) and from the four surveys conducted during the period of voluntary fortification (2003, 2004, 2005 and 2007). Differences in median UIC across intervention periods were compared using Kruskal–Wallis χ2 (corrected for ties) with post-hoc Wilcoxon rank-sum test.
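For readers wishing to reproduce this style of analysis outside Stata, a sketch in Python with SciPy follows. It mirrors the Kruskal–Wallis and post-hoc Wilcoxon rank-sum steps described above on simulated data; the medians and sample sizes from the surveys parameterise the illustration only.

```python
# Sketch of the between-period comparisons (the original analysis used
# Stata 11), run on simulated data for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
pre, vol, mand = (rng.lognormal(np.log(m), 0.4, n)
                  for m, n in [(73, 215), (108, 1482), (129, 320)])

# Kruskal-Wallis test across the three intervention periods (ties corrected).
h, p = stats.kruskal(pre, vol, mand)
print(f"H = {h:.1f}, P = {p:.4f}")

# Post-hoc pairwise Wilcoxon rank-sum tests.
for a, b, label in [(pre, vol, "prefortification v voluntary"),
                    (vol, mand, "voluntary v mandatory")]:
    stat, p_pair = stats.ranksums(a, b)
    print(f"{label}: P = {p_pair:.4g}")
```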

Ethics approval was obtained from the Tasmanian Health and Medical Human Research Ethics Committee and the Department of Education Tasmania. Parent or carer consent was obtained for all participating children.

Results

Of the 320 students participating in the 2011 survey, 158 (49%) were boys, 153 (48%) were girls and nine (3%) were of unknown sex. Participants were aged 8–13 years, with 83% aged 9–10 years. The median UIC in 2011 was 129 µg/L, and 3.4% of samples had a UIC under 50 µg/L.

The median UIC in 2011 was significantly higher than during the period of voluntary fortification (129 µg/L v 108 µg/L; P < 0.001), which in turn was significantly higher than the median UIC from the prefortification period (73 µg/L; P < 0.001) (Box 1). There was a reduction in the proportion of UIC results under 50 µg/L after voluntary fortification compared with prefortification, from 17.7% to 9.6% (P < 0.001), and a further reduction to 3.4% after mandatory fortification (P = 0.001) (Box 2). Box 3 shows the progressive improvement in median UIC results from Tasmanian urinary iodine surveys of schoolchildren over the iodine fortification intervention periods (prefortification, voluntary fortification and mandatory fortification).

Discussion

Our findings show a progressive improvement in the iodine status of Tasmanian schoolchildren over the iodine fortification intervention periods (from prefortification to voluntary fortification and mandatory fortification). This study also shows the specific benefit of a mandatory versus a voluntary approach to iodine supplementation.

Population iodine status is routinely assessed by measuring UIC, whereas determining the appropriate level of fortification in food relies on estimates of dietary intakes. The relationship between dietary iodine intake and UIC is usually linear — an increase in dietary intake results in a comparable increase in urinary excretion.7 The 56 µg/L increase in median UIC from prefortification to mandatory fortification is consistent with the predicted 52 µg/d increase in the mean dietary iodine intake for children aged 9–13 years, estimated by dietary modelling before the introduction of mandatory iodine fortification.8

This is the first study to specifically evaluate the adequacy of iodine nutrition in an Australian population after the introduction of mandatory iodine fortification of bread in 2009. The results are of significance to the Australian population more broadly, as the magnitude of effect of mandatory supplementation on the national population is likely to be similar to that observed in Tasmania.

In the 2004 National Iodine Nutrition Study, a survey of schoolchildren found that Western Australia had the highest median UIC of all Australian jurisdictions, at 142.5 µg/L.9 Extrapolating the magnitude of increase in UIC from our surveys to that observed in WA would result in a UIC just under 200 µg/L (56 µg/L + 142 µg/L), which is at the upper level of the optimal range.1

To facilitate comparisons, the sampling method used in our 2011 survey was modelled on the method used in the surveys conducted during the period of voluntary fortification.5 Classes that included fourth-grade children were originally chosen as the sampling frame to be consistent with World Health Organization guidelines for assessing population iodine status.1 Staff from the Department of Education Tasmania advised that this age group would be sufficiently independent to provide a urine sample, while minimising self-consciousness likely in older children. It is yet to be seen whether the observed impact of mandatory fortification is representative of other population groups, such as adults. Published surveys of prefortification UIC of Melbourne adults offer a useful baseline for this purpose.10 The Australian Health Survey 2011–2013 is measuring UIC in adults and children across Australia, and we anticipate this will provide further evidence of the iodine status in the Australian population.

Comparisons with prefortification surveys should be interpreted with the knowledge that there were subtle differences in sampling methods. A two-stage stratified sampling procedure was adopted in the prefortification period (1998–2000), where schools and then students from within schools were randomly selected. Subsequent surveys used a one-stage cluster sampling method with classes that included fourth-grade students as the sampling frame. These sampling differences are not considered significant and have been discussed elsewhere.5 Any sample bias associated with factors such as socioeconomic status or geographic location is unlikely to affect the results, as an association between UIC and these factors has not been found previously.4

Although the 2011 results are consistent with iodine repletion in the general population, they cannot be generalised to high-risk subgroups such as pregnant and breastfeeding women, whose daily iodine requirements increase by about 40%.11 Prior research in Tasmania has shown persistent iodine deficiency in pregnancy despite the introduction of voluntary iodine fortification.12 Recent evidence suggests that while mandatory iodine fortification may have benefited breastfeeding women, only those consuming iodine-containing supplements had a median UIC in the adequate range.13 Future studies of iodine nutrition should specifically assess the adequacy in these groups. Similarly, ongoing awareness of the recommendation that pregnant and lactating women take 150 µg of supplemental iodine per day should not be overlooked, particularly in those parts of Australia where marginal iodine deficiency has been previously reported.14,15

Changes to the iodine content of the food supply (such as the level of iodine in milk or the level of salt in bread) or shifts in dietary choice (such as a preference for staples other than bread) could jeopardise iodine status in the future.3,16 The value of ongoing vigilance in monitoring population iodine status has been highlighted by previous authors.12,17,18 In addition, monitoring iodine levels in the food supply will be required to inform future adjustments to the mandatory iodine fortification program.

1 Urinary iodine concentration (UIC) of Tasmanian schoolchildren by year and intervention period

| Intervention period | Year (n) | Median UIC (95% CI) | IQR | Proportion of samples with UIC < 50 µg/L (95% CI) |
|---|---|---|---|---|
| Prefortification* | 1998 (124) | 75 µg/L (72–80 µg/L) | 60–96 µg/L | 16.9% (10.3%–23.6%) |
| Prefortification* | 2000 (91) | 72 µg/L (67–84 µg/L) | 54–103 µg/L | 18.7% (10.6%–26.7%) |
| Voluntary fortification* | 2003 (347) | 105 µg/L (98–111 µg/L) | 72–147 µg/L | 10.1% (6.9%–13.3%) |
| Voluntary fortification* | 2004 (430) | 109 µg/L (103–115 µg/L) | 74–159 µg/L | 10.0% (7.2%–12.8%) |
| Voluntary fortification* | 2005 (401) | 105 µg/L (98–118 µg/L) | 72–155 µg/L | 10.5% (7.5%–13.5%) |
| Voluntary fortification* | 2007 (304) | 111 µg/L (99–125 µg/L) | 75–167 µg/L | 7.2% (4.3%–10.1%) |
| Mandatory fortification | 2011 (320) | 129 µg/L (118–139 µg/L) | 95–179 µg/L | 3.4% (1.4%–5.4%) |


IQR = interquartile range. * Based on 1998–2005 surveys.5

2 Comparison of urinary iodine concentration (UIC) of Tasmanian schoolchildren across intervention periods

| Fortification intervention period (n) | Median UIC (95% CI) | Difference from prefortification period | P* (vs prefortification) | P* (vs voluntary fortification) | Proportion of samples with UIC < 50 µg/L (95% CI) | Odds ratio (P) vs prefortification | Odds ratio (P) vs voluntary fortification |
|---|---|---|---|---|---|---|---|
| Prefortification (215) | 73 µg/L (70–79 µg/L) | – | – | – | 17.7% (12.6%–23.8%) | 1 | – |
| Voluntary fortification (1482) | 108 µg/L (102–111 µg/L) | + 35 µg/L | < 0.001 | – | 9.6% (8.1%–11.1%) | 0.49 (< 0.001) | 1 |
| Mandatory fortification (320) | 129 µg/L (118–139 µg/L) | + 56 µg/L | < 0.001 | < 0.001 | 3.4% (1.4%–5.4%) | 0.17 (< 0.001) | 0.34 (0.001) |


* Difference in medians compared using Kruskal–Wallis χ2 (corrected for ties) with post-hoc Wilcoxon rank-sum test. Difference in proportion of samples with UIC < 50 µg/L estimated by logistic regression.
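
As an illustration only, the comparisons described in this footnote could be reproduced along the following lines (a Python sketch with simulated UIC values standing in for the per-sample survey data; the table's odds ratios and P values come from the actual survey results):

```python
# A sketch, with simulated data, of the tests named in the footnote:
# Kruskal-Wallis across periods, post-hoc Wilcoxon rank-sum tests, and
# logistic regression for the odds of a sample having UIC < 50 ug/L.
import numpy as np
from scipy.stats import kruskal, ranksums
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Simulated per-sample UIC values (ug/L); medians loosely follow the table
pre = rng.lognormal(np.log(73), 0.45, 215)
vol = rng.lognormal(np.log(108), 0.45, 1482)
man = rng.lognormal(np.log(129), 0.45, 320)

print(kruskal(pre, vol, man))   # omnibus test for a difference in medians
print(ranksums(pre, man))       # post-hoc pairwise rank-sum comparison

# Logistic regression with prefortification as the reference period
uic = np.concatenate([pre, vol, man])
period = np.repeat([0, 1, 2], [215, 1482, 320])
X = sm.add_constant(np.column_stack([period == 1, period == 2]).astype(float))
fit = sm.Logit((uic < 50).astype(int), X).fit(disp=0)
print(np.exp(fit.params[1:]))   # odds ratios vs prefortification
```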

3 Median urinary iodine concentration (UIC) of Tasmanian schoolchildren from 1998 to 2011

Characteristics of the community-level diet of Aboriginal people in remote northern Australia

Dietary improvement for Indigenous Australians is a priority strategy for reducing the health gap between Indigenous and non-Indigenous Australians.1 Poor-quality diet among the Indigenous population is a significant risk factor for three of the major causes of premature death — cardiovascular disease, cancer and type 2 diabetes.2 The 26% of Indigenous Australians living in remote areas experience 40% of the health gap of Indigenous Australians overall.3 Much of this burden of disease is due to extremely poor nutrition throughout life.4

Comprehensive dietary data for Indigenous Australians are not available from national nutrition surveys or any other source. Previous reports on purchased food in remote Aboriginal communities are dated,5 limited to the primary store,5,6 or short-term or cross-sectional in design.7,8 These studies have consistently reported low intake of fruit and vegetables, high intake of refined cereals and sugars, excessive sodium intake, and limited availability of several key micronutrients.

The aim of this study was to examine characteristics of the community-level diet in remote communities in the Northern Territory over a 12-month period.

Methods

We examined purchased food in three remote communities in relation to:

  • food expenditure;

  • estimated per capita intake;

  • nutrient profile (macronutrient contribution to energy) and nutrient density (nutrient per 1000 kJ) relative to requirements; and

  • major nutrient sources.

We collected information on community size, remoteness and availability of food in each community, as well as community dietary data, including all available foods with the exception of traditional foods and foods sourced externally to the community. Alcohol was prohibited in the three study communities at the time of our study.

Monthly electronic food (and non-alcoholic beverage) transaction data were provided by the community-owned store and independent stores in the three communities for July 2010 to June 2011. Food order data were collected from food suppliers for all food services in each of the three communities. All food and beverage items, with their accompanying universal product code or store-derived product code, quantity sold and dollar value (retail price), were imported to a purpose-designed Microsoft Access database9 and linked to the Food Standards Australia New Zealand survey-specific (AUSNUT 1999 and AUSNUT 200710) and reference (NUTTAB 06) food and nutrient databases (NUTTAB 06 has since been replaced by NUTTAB 2010). Dietary folate equivalent levels per 100 g were modified for bread and flour to equal NUTTAB 2010 levels, as mandatory folic acid fortification had been introduced. Unit weights were derived for all food and drink items and multiplied by the quantity sold to give a total item weight. Food items were categorised into food groups derived from the AUSNUT 2007 food grouping system,10 and beverages were further categorised to provide a greater level of detail (Appendix 1). Nutrient compositions for several items not available in these databases were derived from the product’s nutrition information panel, which is mandatory on all packaged foods in Australia, or from standard recipes. Nutrient availability was derived for 21 nutrients. Energy and nutrient content per 100 g edible portion was multiplied by the edible weight (primarily sourced from Australian Food and Nutrient data10) of each food and beverage item (adjusted for specific gravity to convert mL to g) to derive total energy and nutrient content for each food group.
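
As an illustration of this linkage step, a minimal sketch follows (Python with pandas; the file names, column names and nutrient fields are hypothetical, and the study itself used a purpose-designed Microsoft Access database):

```python
# A sketch (hypothetical file and column names) of linking sales records to
# food composition data to derive energy and nutrient availability by food
# group.
import pandas as pd

sales = pd.read_csv("sales.csv")      # product_code, quantity_sold, retail_value
comp = pd.read_csv("food_comp.csv")   # product_code, food_group, unit_weight_g,
                                      # specific_gravity, edible_fraction,
                                      # energy_kj_per_100g, sodium_mg_per_100g

df = sales.merge(comp, on="product_code", how="left")

# Total edible item weight: unit weight x quantity sold, with mL converted
# to g via specific gravity (specific_gravity = 1.0 for solid foods here)
df["edible_weight_g"] = (
    df["unit_weight_g"] * df["specific_gravity"] * df["quantity_sold"]
    * df["edible_fraction"]
)

# Nutrient content = composition per 100 g edible portion x edible weight
for nutrient in ["energy_kj", "sodium_mg"]:
    df[nutrient] = df[f"{nutrient}_per_100g"] * df["edible_weight_g"] / 100

# Totals by food group for the data period
by_group = df.groupby("food_group")[["energy_kj", "sodium_mg", "retail_value"]].sum()
print(by_group)
```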

Completeness and accuracy of data were ensured by checking the monthly time periods reported; following up with providers where a food description or unit weight was not available or where a discrepancy was noted; checking unit weights against unit dollar values; and having a second person check the matching of foods with nutrient composition data and the assignment of food groups.

Data analysis

Data were grouped by community, food source, month and food group and transferred to Stata 10 (StataCorp) for analysis. Data for all food sources were combined (community food supply) and the average monthly and per capita daily weight and dollar value of each food group were calculated. Mean monthly and daily food weights were assumed to approximate mean monthly and daily dietary intakes for the data period.

The population of each of the three remote communities, and of the three communities combined, was estimated by dividing the total amount of energy provided through the community-level diet (assuming energy balance) by the estimated weighted per capita energy requirement. The estimated total population was verified against Australian Bureau of Statistics (ABS) estimates.11 The weighted per capita energy requirement was determined for each community using the estimated energy requirement for each age group and sex, as stated in the Nutrient Reference Values for Australia and New Zealand12 (with a physical activity factor of 1.6 [National Health and Medical Research Council — light activity13]), in conjunction with the population age and sex distribution determined by the 2006 ABS population census for each of the three communities.
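
In essence, the population estimate reduces to a single division, as the following sketch illustrates (the strata shares and energy requirements shown are illustrative only, not the census or Nutrient Reference Values figures used in the study):

```python
# A sketch of the population estimate: total energy in the community food
# supply divided by the weighted per capita energy requirement, assuming
# energy balance. All numbers below are illustrative, not study values.

# Hypothetical age-sex strata: (share of population, estimated energy
# requirement in kJ/person/day, already scaled by the 1.6 activity factor)
strata = [
    (0.40, 7000),   # children and adolescents
    (0.45, 10000),  # adults aged 19-50 years
    (0.15, 8500),   # adults aged over 50 years
]
weighted_requirement = sum(share * eer for share, eer in strata)  # kJ/person/day

# Hypothetical average daily energy available through purchased food
total_daily_energy_kj = 24.0e6

estimated_population = total_daily_energy_kj / weighted_requirement
print(round(estimated_population))  # compared against ABS population estimates
```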

Nutrient density was calculated for each nutrient by dividing the total nutrient weight by the energy value of the community food supply. Population-weighted nutrient density requirements were derived using estimated average requirements (EARs).12 The EAR for a nutrient is stated as a daily average and varies by age and sex. EARs are estimated to meet the requirements of half the healthy individuals of a particular age group and sex and are used to assess the prevalence of inadequate intakes at a population level.12 A nutrient density below the weighted EAR per 1000 kJ was considered insufficient to meet the population’s requirements.

Adequate intake (AI) values were used for nutrients for which no EAR was available (potassium, dietary fibre and vitamin E α-tocopherol equivalents). The midpoint of the AI range for sodium was used. Macronutrient profiles (the proportions of dietary energy from protein, total fat, saturated fat, carbohydrate and total sugar) were compared with acceptable macronutrient distribution ranges.14 Major food sources were defined as foods contributing 10% or more of a specific nutrient.
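
Put concretely, the adequacy check compares each nutrient's density in the food supply with its population-weighted requirement density (a minimal sketch with illustrative totals; the values are hypothetical, not study data):

```python
# A sketch of the nutrient density adequacy check, with illustrative
# numbers only (hypothetical totals; not study data).
supply = {"energy_kj": 24.0e6, "calcium_mg": 1.5e6, "fibre_g": 40000.0}

# Hypothetical population-weighted requirement densities per 1000 kJ
weighted_requirement_per_1000kj = {"calcium_mg": 90.0, "fibre_g": 2.5}

energy_1000kj = supply["energy_kj"] / 1000
for nutrient, required in weighted_requirement_per_1000kj.items():
    density = supply[nutrient] / energy_1000kj  # nutrient per 1000 kJ supplied
    verdict = "meets" if density >= required else "below"
    print(f"{nutrient}: {density:.1f} per 1000 kJ ({verdict} requirement of {required})")
```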

Ethics approval was provided by the Human Research Ethics Committee of Menzies School of Health Research and the Northern Territory Department of Health, and by the Central Australian Human Research Ethics Committee. Written informed consent was gained from all participating communities, food businesses and food services.

Results

The estimated total population was 2644. Community populations ranged in estimated size from 163 to 2286 residents of mostly Aboriginal ethnicity and were comparable with regard to age and sex distributions.15 The distance from each community to the nearest food wholesaler ranged from 130 km to 520 km. Variation between the communities in remoteness, size, and number of food outlets is shown in Box 1.

Expenditure patterns

Average per capita monthly spending on food and non-alcoholic beverages in communities A, B and C, respectively, was $394 (SD, $31), $418 (SD, $82) and $379 (SD, $80). About one-quarter of all money spent on food and beverages was on beverages (combined communities, 24.8%; SD, 1.4%), with soft drinks contributing 11.6%–16.1% to sales across the three communities (combined communities, 15.6%; SD 1.2%) (Appendix 2). This compares to less than 10% in total spent on fruit and vegetables in each of the three communities (7.3%, 9.1% and 8.9%; combined communities, 2.2% [SD, 0.2%] on fruit and 5.4% [SD, 0.4%] on vegetables) (Appendix 2).

Per capita daily intake

Based on population estimates, there appeared to be differences in the daily per capita volume of many food groups between community A and communities B and C, with less notable differences between communities B and C (Appendix 3).

On average, per capita daily intake of beverages (including purchased water and liquid tea) was 1464 g (SD, 130.5 g) with sugar-sweetened soft drinks comprising 298–497 g across communities (Appendix 3). Liquid tea constituted most of the remaining beverage volume. Daily per capita fruit and vegetable intake in community A (122 g) was just over half that of communities B (222 g) and C (247 g) (Appendix 3).

Macronutrient profile

For community A, the proportion of dietary energy as carbohydrate was at the higher end of the recommended range; for communities B and C it was within the recommended range. Sugars contributed 25.7%–34.3% of the total proportion of dietary energy across the three communities (Box 2), 71% of which was table sugar and sugar-sweetened beverages. The proportion of dietary energy from fat was within the acceptable range for each community, and lower in community A compared with communities B and C. The proportion of dietary energy as saturated fat was within the recommended range for community A and higher than recommended for communities B and C. The proportion of dietary energy as protein was lower than the recommended minimum in all three communities (Box 2).

Micronutrient density

Relative to the weighted EARs (or AIs) per 1000 kJ for the nutrients measured, the diet in all three communities was insufficient in calcium, magnesium, potassium and fibre (Box 3). Iron, vitamin C and folate equivalents were all around double the weighted EAR per 1000 kJ, and niacin equivalents were nearly four times the EAR (Box 3). Sodium was the nutrient provided in the greatest excess, at nearly six times the midpoint of the adequate intake range (Box 3). Most nutrient density values appeared lower in community A than in communities B and C (Appendix 4).

Major nutrient sources

In all three communities, white bread fortified with fibre and a range of micronutrients was a major source of protein, fibre, iron, sodium, calcium, dietary folate, potassium, magnesium and B-group vitamins (Appendix 5). Sugar and sugar-sweetened beverages provided 65%–72% of total sugars (Appendix 5). Bread, salt and baking powder were major sources of sodium in all three communities. Major food sources of all nutrients were similar across the three communities (Appendix 5).

Discussion

Our comprehensive assessment of the community diet averaged over a 12-month period showed a high intake of refined cereals and added sugars, low levels of fruit, vegetables and protein, limiting key micronutrients, and excessive sodium intake. Our findings confirm recent and past reports of dietary quality in remote Aboriginal communities.5,8 We report food expenditure and dietary patterns that are similar to those reported previously using store sales data alone,5,6,8 as are the limiting nutrients (protein, potassium, magnesium, calcium and fibre).8

A striking finding from our study is the high expenditure on beverages and corresponding high intake of sugar-sweetened beverages coupled with low expenditure (and low intakes) of fruit and vegetables.

The level of sugar-sweetened soft drinks reported for communities B and C is in line with what we have previously reported for 10 NT communities from store data alone.6 The apparently substantially higher per capita volume reported for community A warrants further investigation, which could include examining variation in regional consumption, food delivery systems and food outlets. Similarly high per capita consumption of sugar-sweetened beverages has been reported among Aboriginal and Torres Strait Islander children in regional New South Wales (boys, 457 g/day; girls, 431 g/day) and for children at the national level (364.7 g/day).18,19 The high volume of tea purchased is also of concern, as tea is generally consumed as a sugar-sweetened beverage.

The low daily fruit and vegetable intake reported for the three study communities (which on average equated to 0.3 to 0.7 serves of fruit and 1.1 to 2.1 serves of vegetables) is consistent with the reported average of 0.4 serves of fruit and 0.9 serves of vegetables per person per day sold through 10 NT community stores in 2009,6 but lower than intakes self-reported among other Aboriginal populations in remote Queensland and regional NSW.18,20,21 Our estimates do suggest improved intakes compared with the low levels of fruit and vegetable intake reported nearly three decades earlier for six remote NT communities.5 Caution needs to be applied in making comparisons with past studies owing to the use of different methodologies. It has been estimated that increasing fruit and vegetable consumption to up to 600 g per day could reduce the global burden of ischaemic heart disease and stroke by 31% and 19%, respectively.22 The benefits for the Indigenous population are likely to be much greater, considering their currently low intake of fruit and vegetables and high burden of disease.

A further disturbing aspect of the diet is that fibre-modified and fortified white bread is providing a large proportion of key nutrients, including protein, folate, iron, calcium and magnesium, and unacceptably high levels of sodium. Similarly, among Aboriginal and Torres Strait Islander children in regional NSW, bread was also reported to be a major dietary source of energy, salt and fibre.18 It is alarming that white bread is providing a large percentage of dietary protein when it is a poor protein source. Considering the high-quality protein foods traditionally consumed by Aboriginal Australians,23 this apparent shift to a low-protein and high-carbohydrate diet needs investigation. Traditional foods, such as fish and other seafood, eggs and meat, provide high-quality protein, but are unlikely to be significant at the population level if not accessed frequently and by a substantial proportion of the population.

The extremely high rates of preventable chronic disease experienced among Aboriginal people in remote Australia, together with the high intake of sugar-sweetened beverages, unacceptably low levels of fruit and vegetables, and limiting essential nutrients, provide a compelling rationale for doing more to improve diet and nutrition. Poverty is a key driver of food choice,24–26 and although most Indigenous people living in remote communities are in the low income bracket, a standard basket of food costs, on average, 45% more in remote NT communities than in the NT capital.27 People in the study communities spend more on food ($379 to $418 per person per month) than the expenditure estimated for other Australians ($314 per person per month, with 2.6 persons per household).28 To our knowledge, our study provides the only available estimate of remote community food and drink expenditure. Household expenditure data are not available for very remote Australia, representing a gap in information on food affordability, a major determinant of health.

Our study highlighted some important differences in dietary quality between the study communities, with the dietary profile for community A being generally poorer. This may be indicative of intercommunity or regional differences, such as community size, number of food outlets, location and remoteness, access to food outlets, level of subsistence procurement and use of traditional foods, climate, housing or water quality, and warrants broader investigation.

As with individual-level dietary assessment, there are limitations in estimating community-level dietary intake. An inherent issue in community-level per capita measures is the difficulty of determining the population for the study period, so caution is required in using the values presented here; however, the total population (2644) was verified against ABS predicted estimates for the 2011 Australian remote Indigenous population (2638) and was within 4% of the later released ABS census data collected in 2010 for the three study communities (2535). Further, monthly per capita dietary intake estimations were averaged over a 12-month period and are likely to take into account the fluctuations in population that occur in remote communities seasonally and over time. A strength of our study is that expenditure patterns based on proportional spending, macronutrient profile and nutrient density provide an assessment of dietary quality that is entirely independent of population size estimates. Furthermore, as dietary data are derived from food sales records rather than self-reported data, they provide an objective assessment of diet quality. Limitations in using food sales data as a measure of dietary intake have been reported previously.8 Estimated per capita energy intakes for communities A and B differed by less than 10% from per capita requirements derived from 2010 ABS census population figures, indicating completeness of food sales data. Estimated energy intakes for community C were lower than required, at 81% of per capita requirements.

Reports on dietary quality are also limited by the accuracy of food composition databases. For example, the range of nutrients presented for each food in the Australian food composition database varies depending on the analytical data available. Nutrient levels reported in this study are based on currently available nutrient composition data.29

A limitation in assessing the nutritional quality of the community-level diet using purchased food data is the exclusion of traditional food intake. It is assumed that traditional food contributes minimally to community-level dietary intake, as not all families have access to traditional foods and procurement usually does not occur on a regular basis. However, the contribution of traditional food to dietary intake has not been investigated, and we recognise it would be important in future studies to quantify the contribution of traditional foods to total food intake. The low expenditure on (and therefore low intake of) high-quality protein foods suggests either that these foods are not affordable or that they are accessed through subsistence procurement. However, mean daily energy intake estimates based on 2010 census data indicate that the great majority of energy required is provided through the imported food supply.

Despite these limitations, this study provides an objective, contemporary and comprehensive assessment of the community-level diet in three remote Indigenous communities without the inherent limitations of individual-level dietary intake assessment. It provides evidence on key areas of concern for dietary improvement in remote Aboriginal communities.

Very poor dietary quality has been a characteristic of community nutrition profiles in remote Indigenous communities in Australia for at least three decades. Significant proportions of a number of key micronutrients are provided through fortification in a diet derived predominantly from otherwise poor-quality, highly processed foods. Ongoing monitoring of the community-level diet (through the use of food sales data) is needed to better inform the development and implementation of policy and strategy at the community level and beyond. Low income is undoubtedly a key driver of diet quality. Further evidence regarding the impact of the cost of food on food purchasing in this context is urgently needed, and the long-term cost-benefit of dietary improvement needs to be considered.

1 Community characteristics

| Community | Population, age and sex distribution, 2006* | Population, age and sex distribution, 2010* | Estimated population | Distance from food wholesaler; location | Access | Food stores | Food services |
|---|---|---|---|---|---|---|---|
| A | 1697 (49% male; 703 residents < 18 yrs) | 2124 (50% male) | 2286 | > 500 km; island in Top End region | Regular daily flight | Community-owned store; two independent stores | Aged care meals, child care, school canteen, school lunch program, breakfast program |
| B | 250 (49% male; 94 residents < 18 yrs) | 210 (49% male) | 202 | > 400 km; central desert region | Sealed and unsealed road | Community-owned store | Aged care meals, school lunch program, child care |
| C | 217 (43% male; 73 residents < 18 yrs) | 201 (49% male) | 163 | < 150 km; central desert region | Sealed and unsealed road | Community-owned store | Aged care meals, child care, school lunch program, breakfast program |

* Based on Australian Bureau of Statistics (ABS) census data.11,15 A total study population of 2644 was derived from the total energy available in the purchased food supply and the weighted per capita energy requirement based on the total population age and sex distribution; this figure was used for analyses combining data for all communities, rather than the sum of the estimated community populations (2651). All three communities are classified as RA5 (very remote) under the ABS Australian Standard Geographical Classification (http://www.health.gov.au/internet/otd/publishing.nsf/Content/locator).

2 Estimated energy availability and macronutrient profile, overall and by community

| Energy intake | Community A | Community B | Community C | All communities | Recommended range14 |
|---|---|---|---|---|---|
| Estimated per capita energy intake based on 2010 census population (kJ) | 9845 | 9119 | 7623 | 9608 | |
| Estimated per capita energy intake based on estimated energy requirement* (kJ [SD]) | 9147 (927) | 9480 (1644) | 9400 (1740) | 9212 (856) | |
| Macronutrient distribution as a proportion of dietary energy (% [SD]) | | | | | |
| Protein | 12.5% (0.3) | 14.1% (0.8) | 13.4% (0.6) | 12.7% (0.3) | 15%–25% |
| Fat | 24.5% (0.6) | 31.6% (1.5) | 33.5% (1.1) | 25.7% (0.6) | 20%–35% |
| Saturated fat | 9.4% (0.3) | 11.6% (0.6) | 12.1% (0.3) | 9.7% (0.3) | < 10% |
| Carbohydrate | 62.1% (0.8) | 53.3% (1.8) | 52.1% (1.1) | 60.7% (0.8) | 45%–65% |
| Sugars† | 34.3% (0.8) | 28.9% (2.2) | 25.7% (1.8) | 33.4% (0.7) | < 10% |

* Estimated energy requirements were calculated by age group (1–3 years; 4–8 years; 9–13 years; 14–18 years; 19–30 years; 31–50 years; 51–70 years; > 70 years) and sex based on Nutrient Reference Values for Australia and New Zealand, tables 1–3.11 For ages 19 to > 70 years, the midpoint height and weight of each adult age group was used. For those aged < 18 years, the midpoint of the estimated energy requirement range across each age and sex category was used. Energy expenditure was estimated at 1.6 × basal metabolic rate overall. We estimated that 8% of women aged 14–50 years were pregnant and 8% were breastfeeding, based on Australian Bureau of Statistics 2006 births data, table 9.216 and 2006 census data for women aged 13–54 years.15 † The < 10% recommendation applies to “free sugars” — all monosaccharides and disaccharides added to foods by the manufacturer, cook or consumer, plus sugars naturally present in honey, syrups and fruit juices.17

3 Nutrient per 1000 kJ as a percentage of weighted estimated average requirement (EAR) per 1000 kJ,* overall and by community

* Adequate intake values were used for nutrients for which no EAR was available (potassium, dietary fibre, vitamin E α-tocopherol equivalents, sodium).

Diet and nutrition: the folly of the reductionist approach

Diet-related health problems require us to change our food choices rather than emphasise individual nutrients

After almost 4 years of review, in February 2013 the National Health and Medical Research Council released the latest revision of its dietary guidelines for Australians.1 Recognising that people consume foods rather than single nutrients, the new guidelines feature food-based advice, emphasising dietary patterns that are associated with health and wellbeing and are relevant to reducing the risks of obesity and chronic disease.

While researchers use a reductionist approach in analysing the adequacy of selected nutrients or nutrient density, people do not shop for protein, “omega 3s”, “carbs”, calcium or some other nutrient, but for whole foods; and our advice needs to be about which foods to choose more of (fruit, vegetables, wholegrains, legumes, nuts and fish) and which to limit (sweetened drinks and processed foods high in saturated fat, added sugars or salt).

Emphasising one or more particular nutrients can lead to poor food choices. For example, on a nutrient basis, a processed breakfast cereal may contain added vitamins, but the cereal may be high in added sugars or salt. In this issue of the Journal, research by Brimblecombe and colleagues shows that Indigenous Australians in remote communities consume poor-quality processed foods fortified with some nutrients rather than nutrient-rich minimally processed foods.2

There is also accumulating evidence that the source of nutrients matters. Calcium is an important factor in bone health, but there may be large differences between the health outcomes associated with calcium supplied by foods and those associated with taking calcium supplements. New, and admittedly controversial, studies examining consumption of calcium supplements report increased rates of cardiovascular events, especially when supplements are added to an adequate calcium intake from foods,3–5 whereas dietary calcium was unrelated to adverse events. Other studies have shown a slightly increased risk of kidney stones with supplemental but not dietary calcium.6 However, it has been suggested that calcium from foods does not pose the same risk because it is consumed in small quantities throughout the day rather than as a single supplementary dose.7

The complexity of fruits and vegetables provides further incentive to avoid a reductionist approach. Evidence cited in the Australian Dietary Guidelines shows that an adequate intake of fruits and vegetables reduces the risk of cardiovascular disease,1 but the reasons remain unclear. In contrast, a meta-analysis of eight randomised controlled trials using beta-carotene supplements showed a small increase in all-cause and cardiovascular mortality.8 Fruits and vegetables contain hundreds of different carotenoids, which may act synergistically with each other as well as with various vitamins, minerals and different types of dietary fibre.1 Consuming foods high in carotenoids with foods such as extra virgin olive oil, as in a typical Mediterranean diet, may increase their health-promoting effects.

Studies of fish consumption also illustrate the need for a whole food approach. A number of studies show that eating fish once or twice a week is associated with a lower incidence of cardiovascular disease1,9 and a reduced risk of stroke.10 Is this due solely to their omega-3 fatty acids and could a supplement therefore replace fish? Maybe not, according to two recent meta-analyses of studies using fish oil supplements that did not show any reduction in risk of major cardiovascular events or all-cause mortality in either healthy people or those with a history of cardiovascular disease.11,12

It is understandable that researchers want to identify protective factors in foods, but the reality is that foods cannot be reduced to single beneficial components. In practice, a reductionist approach can lead to dietary distortions by which foods with undesirable properties (eg, high sugar or salt) can be marketed as beneficial if fortified with a few selected nutrients.

The recent analysis of the Sydney Diet Heart Study also shows the folly of pushing a single nutrient while ignoring other aspects of the foods containing the nutrient. Beginning in the 1960s, participants were given safflower oil and margarine with a high linoleic acid content. Other components of the margarine, such as its high content of trans fat, and of the total diet were ignored. Leaving aside other possible problems with this study, the outcomes were increased rates of death from all causes, from coronary heart disease and from cardiovascular disease.13

The realistic — and wise — course of action is to look at diet in terms of foods and eating patterns rather than taking a reductionist approach and concentrating on a single nutrient that is almost never consumed on its own.

Australia’s dietary guidelines and the environmental impact of food “from paddock to plate”

In “Australia’s dietary guidelines and the environmental impact of food ‘from paddock to plate’”, published in the 21 January 2013 issue of the Journal (Med J Aust 2013; 198: 18-19), there was an error in the third paragraph of the article. The statement “Around half of Australia’s fisheries are overfished” is incorrect. It should read “Forty per cent of Australia’s managed fish stocks have been deemed overfished”, with a reference to: Srinivasan UT, Watson R, Sumaila UR. Global fisheries losses at the exclusive economic zone level, 1950 to present. Marine Policy 2012; 36: 544-549.