
Electronic cigarettes: what can we learn from the UK experience?

Electronic cigarettes have the potential to deliver substantial improvements in public health

Electronic cigarettes (e-cigarettes) have polarised the medical and public health communities in Australia and internationally. Some researchers describe them as the greatest opportunity to improve public health this century, with the potential to save millions of lives.1 Other commentators are concerned that they could renormalise smoking by increasing the visibility of a behaviour that resembles smoking, act as a gateway to smoking for young people and deter quitting.2

E-cigarettes are battery-powered devices that heat liquid nicotine and other chemicals (e-liquid) into an aerosol for inhalation. E-cigarettes simulate smoking by delivering nicotine as well as addressing the behavioural, sensory and social aspects of the smoking ritual.

As there is no tobacco or combustion, e-cigarettes do not produce the tar or carbon monoxide which are responsible for most of the health effects of smoking. E-cigarettes do contain some toxicants, but at very low levels which are unlikely to pose significant health risks, and they are considered to be much safer than combustible cigarettes.3

Although the sale, possession and use of nicotine-containing e-cigarettes without a permit are illegal in Australia, the devices clearly appeal to smokers and are increasingly popular. Current use of e-cigarettes among current and former smokers increased from 0.6% to 6.6% over the three years from 2010 to 2013, and the proportion of respondents in this population reporting that they had “ever used e-cigarettes” increased from 9.6% to 19.7% over the same period. In 2013, 42.5% of users reported that their current brand contained nicotine.4

The United Kingdom experience

The UK has a more liberal regulatory environment for e-cigarettes, allowing the sale and use of nicotine-containing devices by adults aged 18 years or more.4 E-cigarettes are classified as consumer products and can be legally purchased online and from dedicated “vape” shops, pharmacies and other retail outlets. The most common reason for using e-cigarettes (“vaping”) is to reduce the health risks of smoking by stopping or reducing smoking.5

E-cigarettes are currently used by 2.6 million “vapers” in the UK.5 More than 1 million vapers are ex-smokers who have switched to vaping as a safer alternative to smoking and to avoid relapsing into smoking.5 Long-term use of safer nicotine products has been supported as a harm-reduction strategy in the UK since a landmark report of the Royal College of Physicians which concluded that:

smokers smoke predominantly for nicotine, that nicotine itself is not especially hazardous, and that if nicotine could be provided in a form that is acceptable and effective as a cigarette substitute, millions of lives could be saved.6

The remaining 1.4 million e-cigarette users (about 54%) in the UK continue to smoke tobacco while also vaping (dual use).5 The net health implications of dual use are unclear, but many dual users report reduced symptom severity,7 and a recent study found decreased toxicant exposure from dual use, compared with continuing only to smoke and not use e-cigarettes.8 Some commentators have proposed that dual use may perpetuate smoking in some users who would otherwise have quit.2 However, some dual users will go on to quit smoking, just as many smokers who use nicotine replacement therapy (NRT) while smoking progress to abstinence.9

E-cigarettes are now the most popular aid for quitting smoking in England, being used in 38% of quit attempts.10 NRT is used in 23%, varenicline in 5%, and behavioural support in 3% of quit attempts.10 A cross-sectional population study of nearly 6000 English smokers found that those who used e-cigarettes in their most recent quit attempt were 60% more likely (after correcting for confounding variables) to be abstinent 12 months later than those quitting unaided or using over-the-counter NRT products. Self-reported quit rates were 20% for e-cigarettes, 15.4% for unaided quitting and 10.1% for NRT.11 It has been estimated that of 1 080 000 smokers who tried to quit using an e-cigarette in 2014 in England, 20 340 additional smokers were able to achieve long-term (1-year) abstinence because of the availability of e-cigarettes.12

A recent report from a trial conducted at a London smoking clinic suggested that adding e-cigarettes to standard behavioural support and other pharmacotherapies, such as NRT or varenicline, may further increase effectiveness,13 and this approach is endorsed by the UK National Centre for Smoking Cessation and Training and the UK public health agency, Public Health England (PHE).3

A recent independent review of the evidence commissioned by PHE concluded that e-cigarettes are around 95% safer than smoking, and that their use could be encouraged for smokers who have failed to quit with other methods or as a harm-reduction strategy for smokers who are not willing or able to quit.3 In the view of PHE, there are sufficient data to endorse the use of e-cigarettes while further research and monitoring continue.3

The PHE report has been criticised by some commentators who believe that the incomplete evidence does not yet allow such firm conclusions on efficacy and safety.14 Concerns have also been expressed about the potential for renormalising community smoking and the gateway effect for young people.14 Others have observed that the strong views on both sides of the debate are driven by ideology and predetermined opinions, particularly about acceptance of the harm-reduction model.

In the UK, there is no evidence that e-cigarettes are renormalising smoking. As e-cigarettes have become popular, quit attempts have increased and smoking prevalence has continued to fall.5,15

There is also no evidence so far of a gateway effect; ie, non-smokers taking up e-cigarettes and then progressing to smoking.3 Although some children and young people experiment with e-cigarettes, their regular use in this population is rare and is confined almost entirely to current or previous tobacco smokers.16 In adults aged 16 years and over who have not smoked previously, only 0.2% use e-cigarettes regularly, and there are no recorded instances of daily vaping.5,15

The UK data contrast with the findings of a cross-sectional Polish study that reported a significant rise in smoking and e-cigarette use by 15–19-year-old students between 2010–2011 and 2013–2014.17 Smoking rates increased from 23.9% to 38%, and e-cigarette use from 5.5% to 21.9%, between these periods. The rate of dual use in 2013–2014 was also high, at 72.4% of e-cigarette users.

Implications for Australia

Based on the UK experience, e-cigarettes may be another useful tool for helping Australian smokers who are unwilling or unable to quit using the currently available treatments.

The real-world effectiveness of e-cigarettes for smoking cessation in the English study is promising, and is consistent with the results of clinical trials.18,19 However, the quality of the evidence overall from trials is low because of the small number of studies available, and the outcomes need to be interpreted cautiously.

As with NRT, the best quitting results are likely when e-cigarettes are used with behavioural support. E-cigarettes can also be used in conjunction with other approved pharmacotherapies, such as varenicline or nicotine patches, for improved outcomes.13

The UK data also suggest a valuable role for e-cigarettes in harm reduction for Australian smokers who are not willing or able to give up nicotine or the smoking ritual. If a large number of smokers switched to long-term use of e-cigarettes, this would have an immediate and substantial positive impact on public health.6

There has been no indication so far in the UK of the potential negative unintended consequences of widespread e-cigarette use. There is no evidence of a gateway effect or of renormalisation of smoking behaviour. On the contrary, e-cigarettes may be acting as a gateway out of smoking; however, it is early in the cycle of e-cigarette uptake, and their impact on smoking behaviour will need careful monitoring in the future.

It has become apparent from the UK experience that some vapers will continue to smoke and vape in the long term, typically with reduced smoke intake. Even reduced smoking poses some risk, however, and dual users should be encouraged and helped to stop smoking as soon as possible.

Careful, proportionate deregulation of e-cigarettes could give Australian smokers access to the benefits of vaping while minimising potential harm to public health. Appropriate regulations could include banning vaping in smoke-free areas, prohibiting sales to minors, restricting advertising, improving quality control, and requiring child-resistant e-liquid containers and clear labelling.20

Conclusion

The UK experience with e-cigarettes has so far been positive. E-cigarettes are helping some smokers to quit or reduce their tobacco intake. Others are able to substantially reduce harm with the switch to a safer nicotine delivery device. The concerns that underlie the strict Australian approach to e-cigarettes — ie, that they could renormalise smoking, act as a gateway to smoking for children, and reduce quitting rates — have not been supported by evidence from the UK.

Regulation of e-cigarettes in Australia should be liberalised to allow smokers the opportunity to benefit from their use. The popularity and widespread uptake of e-cigarettes creates the potential for large-scale improvements in public health in Australia and for faster progress towards the endgame, the ultimate demise of combustible tobacco.

Health gets a guernsey in Paris

The right to health has been explicitly recognised in the agreement negotiated at the United Nations Paris climate change talks, boosting hopes of an increasing focus on the health effects of global warming.

In its preamble, the Paris Agreement directed that, when taking action on climate change, signatories should “respect, promote and consider their respective obligations on…the right to health”.

Director of the World Health Organization’s Department of Public Health, Environmental and Social Determinants of Health, Dr Maria Neira, hailed the declaration as a “breakthrough” in recognising the health effects of climate change.

“This agreement is a critical step forward for the health of people everywhere,” Dr Neira said. “The fact that health is explicitly recognised in the text reflects the growing recognition of the inextricable linkage between health and climate.”

Dr Neira said that health considerations were essential to effective plans to adapt to climate change and mitigate its effects, and “better health will be an outcome of effective policies”.

Under the Paris deal, countries have expressed an “ambition” to limit global warming to less than 2 degrees Celsius, the point beyond which science suggests climate change becomes unacceptably dangerous.

While avoiding setting an explicit target, the signatory countries, including Australia, committed to “pursuing efforts to limit the temperature increase to 1.5 degrees Celsius”.

Attempts to orchestrate concerted global climate change action have in the past been frustrated by arguments over who should bear the greatest responsibility for causing climate change and, as a consequence, who carries the greatest obligation to ameliorate its effects.

Developing countries have argued that industrialised nations have become rich on fossil fuel-based economic activity and should bear the greater share of the burden in adapting to its consequences.

But developed countries have countered that any progress they make in curbing greenhouse gas emissions should not be simply offset by an increase in emissions from emerging economies.

The Paris agreement has sought to break the impasse by detailing a framework of “differentiated responsibilities” for climate action. Developed countries are expected to take the lead in reducing greenhouse gas emissions, but developing nations are also expected to contribute.

To help drive the global response, it is expected that by 2020, countries will contribute $US100 billion a year to a global fund to help finance emission reduction and climate change adaptation measures.

Though the agreement does not include an enforcement mechanism, countries are required to provide an update on their climate change action every five years, and each successive update must be at least as strong as the last, leading to what the framers of the document intend to be a “ratcheting up” of measures over time.

The promising outcome to the Paris meeting followed a call by the AMA and other peak medical groups worldwide for more concerted action to prepare for and mitigate the health effects of climate change.

In an updated Position Statement on Climate Change and Human Health released last year, the AMA highlighted multiple health threats including increasingly frequent and severe storms, droughts, floods and bushfires, pressure on food and water supplies, rising vector-borne diseases and climate-related illnesses and the mass displacement of people.

AMA President Professor Brian Owler said significant health and social effects of climate change were already evident, and would only become more severe over time.

“Nations must start now to plan and prepare,” Professor Owler said. “If we do not get policies in place now, we will be doing the next generation a great disservice. It would be intergenerational theft of the worst kind – we would be robbing our kids of their future.”

The AMA’s Position Statement on Climate Change and Human Health can be viewed at:  position-statement/ama-position-statement-climate-change-and-human-health-2004-revised-2015

Adrian Rollins

Chromium supplements linked to carcinogens: research

An Australian research team has found concerns with the long-term use of nutritional supplements containing chromium.

UNSW and University of Sydney researchers say chromium partially converts into a carcinogenic form when it enters cells.

The findings are published in the chemistry journal Angewandte Chemie.

Chromium occurs primarily in two forms: trivalent chromium (III), sold as a nutritional supplement in forms such as chromium (III) picolinate, and hexavalent chromium (VI), its ‘carcinogenic cousin’.

The team, led by Dr Lindsay Wu from UNSW’s School of Medical Sciences and Professor Peter Lay from the University of Sydney’s School of Chemistry, treated animal fat cells with chromium (III) in a laboratory and used a synchrotron’s X-ray beam to map every chemical element contained within the cell.

Related: Supplement claims rejected

“The high energy X-ray beam from the synchrotron allowed us to not only see the chromium spots throughout the cell but also to determine whether they were the carcinogenic form,” said Dr Wu.

“We were able to show that oxidation of chromium inside the cell does occur, as it loses electrons and transforms into a carcinogenic form.

“This is the first time this was observed in a biological sample,” Dr Wu said.

Professor Lay said the finding raises concerns about the possible carcinogenicity of chromium supplements.

“With questionable evidence over the effectiveness of chromium as a dietary supplement, these findings should make people think twice about taking supplements containing large doses of chromium,” Professor Lay said.

“However, additional research is needed to ascertain whether chromium supplements significantly alter cancer risk.”

Related: Real food, supplements help the elderly stay healthy

There is controversy over whether the dietary form of chromium is essential.

Chromium supplements are sometimes used for the treatment of metabolic disorders; however, they are also commonly used for weight loss and body building.

Australia’s National Health and Medical Research Council Nutrient Reference Values, currently under review, recommend 25–35 micrograms of chromium daily as an adequate intake for adults.

Trace amounts of chromium (III) can be found in some foods; however, these findings are unlikely to apply to dietary chromium.


‘Get kids out of detention’

The Australian Medical Association has released its revised Position Statement on the Health Care of Asylum Seekers.

The statement reaffirmed the AMA’s long-held belief that all asylum seeker children should be moved out of immigration detention.

AMA President Professor Brian Owler said the AMA acknowledged that the Government had significantly reduced the number of children in detention, but that more could be done.

Related: MJA – Let the children go — advocacy for children in detention by the Royal Australasian College of Physicians

“Detention has severe adverse effects on the health of all asylum seekers, but the harms in children are more serious.

“Some of the children have spent half their lives in detention, which is inhumane and totally unacceptable.

“These children are suffering extreme physical and mental health issues, including severe anxiety and depression.

“Many of these conditions will stay with them throughout their lives,” Professor Owler said.

According to the latest Immigration Detention and Community Statistics Summary, as at 30 November 2015, there were 104 children held in immigration detention facilities within the Australian mainland, 70 children held in detention in Nauru, and 331 children in community detention.

Related: Nauru detention unsafe for children: Senate inquiry

The position statement also confirmed the AMA position that those who are seeking or have been granted asylum should have the right to appropriate medical care.

“Refugees and asylum seekers living in the community should have access to Medicare and the Pharmaceutical Benefits Scheme, state welfare and employment support, and appropriate settlement services,” Professor Owler said.

Other recommendations include:

  • There should be a maximum time that an asylum seeker can spend in detention
  • Those in detention should have access to appropriate specialist services
  • Anyone who has been in detention should be able to access their medical records after their release or deportation
  • Doctors treating asylum seekers who are transferred should be able to provide appropriate handover of relevant documents
  • Doctors should not be obliged to artificially feed a hunger striker

Visit the AMA’s site to read their position statement.


[Articles] Global, regional, and national comparative risk assessment of 79 behavioural, environmental and occupational, and metabolic risks or clusters of risks in 188 countries, 1990–2013: a systematic analysis for the Global Burden of Disease Study 2013

Behavioural, environmental and occupational, and metabolic risks can explain half of global mortality and more than one-third of global DALYs, providing many opportunities for prevention. Of the larger risks, the attributable burden of high BMI has increased in the past 23 years. In view of the prominence of behavioural risk factors, behavioural and social science research on interventions for these risks should be strengthened. Many prevention and primary care policy options are available now to act on key risks.

Risk factors and burden of acute Q fever in older adults in New South Wales: a prospective cohort study

Q fever is a highly infectious zoonotic disease caused by the bacterium Coxiella burnetii.1,3 The main reservoirs for this bacterium are domestic and wild animals, and it can be excreted in their urine, faeces, milk and products of conception, and can survive in harsh environmental conditions.1 Transmission to humans occurs mainly through direct contact with infected animal products or by inhalation of contaminated dust or aerosols.4 In humans, Q fever manifests as an acute flu-like illness or, less frequently, with pneumonia or hepatitis; infection is often asymptomatic.1 Chronic Q fever, most frequently presenting as endocarditis, occurs in about 5% of symptomatic cases.1 Q fever fatigue syndrome is the most frequently reported sequela of acute infection (10%–20% of cases).5

A Q fever vaccine is available in Australia and is recommended for those at high occupational risk of infection.6,7 During 2001–2006, the federal government funded the National Q Fever Management Program (NQFMP) in various states, including New South Wales; under this program, people at high risk were screened and vaccinated, including abattoir workers, sheep shearers, and sheep, dairy and beef cattle farmers and their farm workers. Uptake of the vaccine was almost 100% among abattoir workers and about 43% among farmers; the program significantly reduced the number of notified cases of Q fever in abattoir workers.7 National notification rates suggest there was some decline in the incidence of Q fever during 2006–2009 — from 2.0 to 1.4 notified cases per 100 000 population — but this was followed by a gradual return to 2.0 cases per 100 000 population by 2014; the highest reported rates were among adults aged 45–69 years.8

Most epidemiological studies have been retrospective and focused on specific occupational groups,9,10 and there are only limited data on factors associated with Q fever risk outside these populations. We therefore examined the risk and acute burden of Q fever in a population-based prospective study of Australian adults aged 45 years and over living in NSW.

Methods

Participants

We used data for participants recruited in NSW during 2006–2009 for a prospective study of adults aged 45 years and over (the Sax Institute’s 45 and Up Study); the recruitment procedures have been published elsewhere.11 In brief, NSW residents aged 45 years or over were randomly selected from the Australian Medicare database and invited to participate. The 45 and Up Study oversampled residents in rural and remote areas, and those aged 80 years and over. At recruitment, participants completed a baseline questionnaire that provided information on their sociodemographic factors, behaviour and health.12

Participants consented to long-term follow-up and linkage of their data.11 For the study described in this article, participants were linked to the NSW Notifiable Conditions Information Management System (NCIMS), the NSW Admitted Patient Data Collection (APDC) and the NSW Registry of Births, Deaths and Marriages (RBDM). The NSW Centre for Health Record Linkage (CHeReL) performed the linkage independently of the study investigators, using probabilistic matching.

The NCIMS database records all notifications of Q fever in NSW residents; it includes information on the date of onset and details of laboratory confirmation, including the type of specimen used. Notifications of Q fever require laboratory definitive evidence or laboratory suggestive evidence together with clinically compatible disease (Box 1).13 The APDC records information about all admissions to hospitals in NSW, including the date of admission and discharge, the primary diagnosis, and up to 49 secondary diagnoses affecting treatment or length of stay, coded according to the International Classification of Diseases, 10th revision, Australian modification (ICD-10-AM). The RBDM records the date of death of NSW residents.14 For this study, the data from the NCIMS and RBDM were complete to 31 December 2012, and the APDC data were complete to 30 June 2012.

All participants provided written informed consent. This study was approved by the NSW Population Health Research Ethics Committee (approval number, 2010/12/292) and the University of New South Wales Human Research Ethics Committee.

Outcome definitions

The study outcomes were incident Q fever diagnoses (cases) and the proportion of these patients who were admitted to hospital. We defined participants as having an incident Q fever diagnosis if they had a linked record of notified Q fever in the NCIMS database after recruitment. Cases of Q fever with linked hospital records between 6 weeks before and after the Q fever notification date were examined, and classified as follows:

  • primary Q fever: at least one hospitalisation for which the ICD-10-AM code A78 was recorded as the primary diagnosis

  • secondary Q fever: at least one record including A78 as a secondary diagnosis

  • Q fever-related: no A78 codes but one of the following primary diagnoses recorded: A49.9 (bacterial infection, unspecified), B17.9 (acute viral hepatitis, unspecified), B34.9 (viral infection, unspecified), J18.9 (pneumonia, unspecified organism), or R50.9 (fever, unspecified);15,16 and

  • presumed unrelated: none of the above recorded.

The 6-week window was chosen because most cases of acute illness resolve within 6 weeks of onset.17 The number of deaths among notified Q fever patients within 6 weeks of the recorded onset of disease was determined.
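The classification rules above can be sketched as a small function. This is a hypothetical illustration only: the record fields, the dates and the precedence of the categories (primary over secondary over Q fever-related) are assumptions, not the study's actual code or data.

```python
from datetime import date

# Hypothetical illustration of the study's case-classification rules.
# A "record" is one hospital admission: its primary diagnosis code,
# any secondary diagnosis codes, and the admission date.

Q_FEVER = "A78"  # ICD-10-AM code for Q fever
RELATED = {"A49.9", "B17.9", "B34.9", "J18.9", "R50.9"}  # non-specific codes

def classify_case(notification_date, records, window_days=42):
    """Classify a notified Q fever case from linked hospital records
    within 6 weeks (42 days) either side of the notification date."""
    in_window = [r for r in records
                 if abs((r["admitted"] - notification_date).days) <= window_days]
    if not in_window:
        return "not hospitalised"          # no linked record in the window
    if any(r["primary"] == Q_FEVER for r in in_window):
        return "primary Q fever"
    if any(Q_FEVER in r["secondary"] for r in in_window):
        return "secondary Q fever"
    if any(r["primary"] in RELATED for r in in_window):
        return "Q fever-related"
    return "presumed unrelated"

# Example: one admission 10 days after notification, coded A78 as primary
records = [{"primary": "A78", "secondary": [], "admitted": date(2010, 3, 11)}]
print(classify_case(date(2010, 3, 1), records))  # primary Q fever
```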

Statistical analyses

Analyses excluded those with a record of Q fever notification before study recruitment. Person-years at risk were calculated from the date of study recruitment to the date of Q fever onset or death, or 31 December 2012, whichever occurred first. Hospitalisation analyses were restricted to cases with a diagnosis date on or before 20 May 2012; ie, 6 weeks before the last date for which we had complete hospital records. This restriction was imposed to ensure that all hospitalisation events within 6 weeks of the onset of Q fever were captured.
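The censoring rule described above can be written out as a minimal sketch (the dates below are invented for illustration; the study's actual code is not published here):

```python
from datetime import date

STUDY_END = date(2012, 12, 31)  # end of complete notification follow-up

def person_years(recruited, onset=None, died=None, end=STUDY_END):
    """Person-years at risk: recruitment date to the earliest of Q fever
    onset, death, or the study end date (whichever occurred first)."""
    stop = min(d for d in (onset, died, end) if d is not None)
    return (stop - recruited).days / 365.25

# A participant recruited mid-2008 with no Q fever notification or death
# is censored at the study end date:
print(round(person_years(date(2008, 7, 1)), 1))  # 4.5
```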

The incidence of notified Q fever cases was estimated according to age (stratified as 45–54 years, 55–64 years and 65 years or older); sex; area and type of residence (a composite variable that includes both area of residence — major city, inner regional or outer regional/remote/very remote, according to the Accessibility/Remoteness Index of Australia [ARIA+] — and accommodation type — living on a farm or not); smoking history (never or ever smoked); and number of hours spent outdoors each day (less than 4, 4 to less than 8, 8 hours or more).

We used Cox proportional hazards models to estimate unadjusted (univariate) hazard ratios (HR) for Q fever according to these characteristics. Variables associated with Q fever (P < 0.1) were included in a multivariable model, with the final model determined using a backward elimination method. Variables for which P < 0.05 were retained in the final model. Missing-data categories were included in the multivariable model and reported only if the proportion of missing values exceeded 5%.

We also examined the proportion of notified patients who were hospitalised, their concurrent diagnoses on admission, and, for those with a Q fever-coded hospitalisation, the median length of stay. Kruskal–Wallis tests were used to compare the median number of hours spent outdoors each day according to area and type of residence. P < 0.05 was defined as statistically significant. All analyses were performed with Stata 12 (StataCorp).

Results

After excluding 202 participants with notified Q fever before recruitment, our analysis included 266 906 participants who were followed up for 1 254 650 person-years (mean follow-up time, 4.7 ± 1.0 years per person). The mean recruitment age was 62.7 ± 11.2 years, and 53.6% were women. There were 45 participants with a linked Q fever notification during follow-up (for 44 there was positive serological evidence; for one, the diagnosis method was unknown).

In our study population, the incidence of notified Q fever was 3.6 (95% CI, 2.7–4.8) per 100 000 person-years. The relationship of incidence with various sociodemographic characteristics is shown in Box 2. In unadjusted models, age (P = 0.01), sex (P = 0.03), area and type of residence (P < 0.001 for trend), and time spent outdoors each day (P < 0.001 for trend) were significantly associated with Q fever notification, while smoking was not (P = 0.8). Only age (P = 0.03), sex (P = 0.02), and area and type of residence (P < 0.001 for trend) remained significant in the multivariable model. There was a gradient of increasing risk according to geographic area and residence on a farm. Those living on a farm in outer regional/remote areas were at greatest risk, followed by those living on a farm in inner regional areas, with those not living on farms least at risk (Box 2). The relative risk of Q fever for those aged 65 years or over was significantly lower than for younger participants, and was also lower for women than men (Box 2). The amount of time spent outdoors each day was related to the area and type of residence, ranging from 2.6 hours for living in a major city to 4.6 hours for those living on a farm in outer regional/remote areas (Kruskal–Wallis test, P < 0.001). However, differences in time outdoors did not remain significant (P = 0.4 for trend) after adjustment for area and type of residence in the multivariable model.
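The headline rate follows directly from the counts reported above. A short calculation reproduces it, with an approximate 95% confidence interval computed on the log scale for a Poisson count (an assumption for illustration; the paper does not state its exact interval method):

```python
import math

# 45 notified cases over 1 254 650 person-years, per 100 000 person-years
cases, pyears = 45, 1_254_650
rate = cases / pyears * 100_000
print(round(rate, 1))  # 3.6

# Approximate 95% CI: exponentiate +/- 1.96 standard errors of log(count)
half_width = 1.96 / math.sqrt(cases)
lo, hi = rate * math.exp(-half_width), rate * math.exp(half_width)
print(round(lo, 1), round(hi, 1))  # 2.7 4.8
```

The approximation happens to reproduce the interval reported in the study (2.7–4.8 per 100 000 person-years).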

Of 45 incident notifications, we had complete follow-up of hospital records for 39 patients. Of these, 17 (44%) were hospitalised at least once (for any cause) within 6 weeks of the recorded disease onset date (before or after onset). The hospitalisation was coded as being for Q fever in 15 cases (seven as primary or secondary Q fever, eight as Q fever-related). The median length of stay for patients with these diagnoses was 4 days (interquartile range, 3–9 days). There were no deaths or intensive care unit stays recorded for the notified cases.

According to the APDC database, 11 participants had been hospitalised with primary Q fever or secondary Q fever, but four of these were not recorded as Q fever cases in the NCIMS database.

Discussion

This is the first population-based prospective study of the risk and burden of acute Q fever in a general adult population in Australia. We found that a clear increase in the risk of notified Q fever in adults was associated with living on a farm and with geographic remoteness. Those living on farms in outer regional and remote areas were at highest risk, and the hazard was lowest for those living in major cities. Risks were also greater for those under 65 years of age and for men, but risk was not increased for smokers or associated with greater time spent outdoors. Fifteen of 39 notified Q fever cases (38%) were hospitalised with a diagnosis consistent with Q fever.

In this study, we observed an incidence of notified Q fever of 3.6 per 100 000 person-years, with the highest rate among those aged 55–64 years (5.4 per 100 000 person-years). This is broadly consistent with Q fever notification rates for the total NSW population aged 45 years or over reported during 2009–2012 (2.9 per 100 000 persons, with the highest average annual rates for those aged 55–64 years: 4.1 per 100 000 persons).7,8,18 The slightly higher disease burden in our study is not surprising, as the 45 and Up Study oversampled the residents of rural and remote NSW, where Q fever notification rates are much higher than in urban centres.

We estimated that the notified Q fever risk was about five times higher for adults living on a farm in inner regional areas and about 12 times higher for those living on a farm in outer regional and remote areas than for those in inner regional areas not living on a farm. This finding is consistent with other reports that found farmers to be at greater risk of Q fever,18,19 and suggests that immunisation coverage in this group is inadequate. Even though the NQFMP provided free vaccination to farmers, uptake was estimated to be only about 43%, and in NSW the vaccination program ended in 2004.7 After allowing for workforce turnover, it is likely that an even lower proportion of current farmers have been vaccinated. An alternative explanation would be that vaccine-induced immunity has waned, but there is good evidence that the vaccine is highly effective, with immunity lasting for at least 5 years and probably for life.20

Massey and colleagues19 have suggested that demographic factors other than occupation should be identified to better define risk groups, as a fifth of notified Q fever cases from rural areas did not report occupational exposure to Q fever. Similarly, investigations of the recent major Q fever outbreak in the Netherlands found that people living near farms, but not working on one, were also at increased risk of disease.21-23 We did not have information on the occupations of participants in our study, but our finding of increased Q fever risk for those living in more remote areas but not on a farm is consistent with the results of these other studies. Taken together, they support calls for medical practitioners in regional and remote Australia to routinely consider Q fever in their differential diagnosis of acute flu-like illnesses, even for patients not living on farms.24

We also examined other factors potentially relevant to Q fever risk. Time spent outdoors was not significant in our multivariable model, as any effect was almost completely explained by the area and type of residence variable. There was no indication of an increased risk for smokers. A substantial proportion (44%) of notified Q fever cases had been hospitalised, which is at the higher end of the range of hospitalisation estimates reported in an extensive review.1 Studies suggest that up to 20% of those with Q fever will develop chronic conditions, such as endocarditis or chronic fatigue syndrome, that also require health care outside of hospitals and entail losses of productivity and quality of life.14,25-27 This lends further weight to calls for improved disease prevention efforts.

We identified 15 cases of Q fever for which a hospitalisation code consistent with Q fever was recorded, but only seven were specifically coded as Q fever (ICD-10-AM, A78). This suggests that limiting analysis to hospital admissions specifically coded as primary or secondary Q fever diagnoses is likely to substantially underestimate the true burden of Q fever-related morbidity. We also identified four participants linked to hospitalisations coded as Q fever, but for which there was no record of Q fever in the NCIMS database. It is possible that these were clinically compatible cases that did not meet the case definition of confirmed Q fever because of negative diagnostic test results, and were therefore not notified, or it may indicate under-reporting of genuine cases.

To our knowledge, our study is the first to use prospectively ascertained events to examine the risk and burden of Q fever in older adults in a general population of Australian residents. It encompassed a period during which no major Q fever outbreaks were reported, and therefore more accurately assesses the risk and burden of endemic Q fever. Potential limitations include our use of notification data to identify Q fever cases; such data usually underestimate the number of infections, and may also depend on the propensity of physicians to consider the diagnosis, which may differ according to the characteristics of their patients. In addition, we had no data on the occupations or vaccination status of participants. The number of Q fever cases was relatively small, leading to wide confidence intervals for the risk estimates; the small numbers also meant that we could not stratify the “ever smoked” category into current and past smokers. Finally, the study cohort was probably healthier than the overall NSW population of the same age range, as indicated by its lower rate of smoking.11

In conclusion, our results support current recommendations for Q fever vaccination of farmers and add to the existing body of evidence that suggests targeting a broader, geographically based population in regional and remote regions is required to reduce the burden of Q fever in Australia.

Box 1 –
Australian national notifiable diseases case definitions — Q fever13


Confirmed case

A confirmed case requires either:

1. Laboratory definitive evidence

OR

2. Laboratory suggestive evidence AND clinical evidence.

Laboratory definitive evidence

1. Detection of Coxiella burnetii by nucleic acid testing

OR

2. Seroconversion or significant increase in antibody level to Phase II antigen in paired sera tested in parallel in absence of recent Q fever vaccination

OR

3. Detection of C. burnetii by culture (note: this practice should be strongly discouraged except where appropriate facilities and training exist)

Laboratory suggestive evidence

Detection of specific IgM in the absence of recent Q fever vaccination.

Clinical evidence

A clinically compatible disease


Box 2 –
Incidence of and hazard ratios for notified Q fever in NSW according to various sociodemographic characteristics, 2006–2012

| | Cases | Population | Person-years | Incidence per 100 000 person-years (95% CI) | HR* (95% CI) | Adjusted HR† (95% CI) |
|---|---|---|---|---|---|---|
| All participants | 45 | 266 906 | 1 254 650 | 3.6 (2.7–4.8) | | |
| Age group | | | | | | |
| 45–54 years | 16 | 78 756 | 377 770 | 4.2 (2.6–6.9) | 1.00 | 1.00 |
| 55–64 years | 22 | 85 654 | 408 515 | 5.4 (3.5–8.2) | 1.27 (0.67–2.42) | 1.20 (0.63–2.29) |
| ≥ 65 years | 7 | 102 496 | 468 365 | 1.5 (0.7–3.1) | 0.35 (0.14–0.85) | 0.39 (0.16–0.96) |
| Sex | | | | | | |
| Men | 28 | 123 766 | 579 608 | 4.8 (3.3–7.0) | 1.00 | 1.00 |
| Women | 17 | 143 140 | 675 042 | 2.5 (1.6–4.0) | 0.52 (0.28–0.95) | 0.48 (0.26–0.88) |
| Smoking | | | | | | |
| Never | 27 | 152 427 | 718 838 | 3.7 (2.6–5.5) | 1.00 | na |
| Ever | 18 | 113 052 | 529 243 | 3.4 (2.1–5.4) | 0.90 (0.50–1.64) | na |
| Area and type of residence | | | | | | |
| Major city | ‡ | 120 267 | 562 377 | 0.2 (0.1–1.3) | 0.07 (0.01–0.55) | 0.07 (0.01–0.54) |
| Inner region; not on farm | 10 | 84 699 | 398 756 | 2.5 (1.3–4.7) | 1.00 | 1.00 |
| Outer region/remote; not on farm | 11 | 42 006 | 198 012 | 5.5 (3.1–10.0) | 2.21 (0.94–5.21) | 2.21 (0.94–5.21) |
| Inner region; on farm | 6 | 9 082 | 43 511 | 13.8 (6.2–30.6) | 5.51 (2.00–15.15) | 4.95 (1.79–13.65) |
| Outer region/remote; on farm | 17 | 10 657 | 51 090 | 33.3 (20.7–53.5) | 13.28 (6.08–29.01) | 11.98 (5.47–26.21) |
| Time spent outdoors§ | | | | | | |
| < 4 hours/day | 19 | 172 874 | 814 719 | 2.3 (1.5–3.6) | 1.00 | 1.00 |
| 4–7 hours/day | 14 | 57 363 | 269 247 | 5.2 (3.1–8.8) | 2.23 (1.12–4.45) | 1.21 (0.58–2.51) |
| ≥ 8 hours/day | 6 | 16 432 | 76 995 | 7.8 (3.5–17.3) | 3.35 (1.34–8.38) | 1.20 (0.45–3.19) |
| Missing data | 6 | 20 237 | 93 689 | 6.4 (2.9–14.2) | 2.74 (1.09–6.86) | 1.93 (0.75–4.93) |

HR = hazard ratio; na = not applicable. *Unadjusted results. †Variables in final model: age group, sex, area and type of residence. ‡Number of cases not displayed due to small numbers. §Adjusted for age group, sex, and area and type of residence.

Americans shooting themselves in the foot: the epidemiology of podiatric self-inflicted gunshot wounds in the United States

The United States is home to about one-third of all firearms worldwide, with 90 guns for every 100 American citizens.1 It is therefore perhaps not surprising that gunshot wounds (GSWs) are among the leading causes of injury in the US.2,3 The statistics indicate that 93% of the wounded are men, 56% are unemployed, and 56% tested positive for drugs or alcohol after the incident.4-6 As the incidence of GSWs is increasing, epidemiological studies that provide insight into their general nature and the circumstances in which they occur are useful for developing preventive education. Further, an understanding of terminal ballistics is important for determining the appropriate clinical management of GSWs.

The extent of injury inflicted by a GSW is determined by the energy of the primary projectile, its dissipation in the tissue, and the generation of secondary projectiles following osseous injury. The kinetic energy of a bullet before impact is equal to half its mass multiplied by the square of its velocity; the energy of a projectile thus increases with the square of its velocity. To maximise mass (and minimise energy loss caused by air resistance), bullets are often made with pointed or rounded tips from metals with a high specific gravity, such as lead. The energy transferred to the tissue after impact is the difference between the kinetic energy of the bullet as it enters and as it leaves the tissue; this difference depends on the bullet’s diameter on impact and the density of the tissue. The more a bullet deforms or mushrooms on impact, the greater the amount of energy transferred to the tissue.7,8
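As a back-of-the-envelope illustration of this relationship (all bullet masses and velocities below are hypothetical, not drawn from the article):

```python
# Kinetic energy E = 1/2 * m * v^2; doubling velocity quadruples energy.
def kinetic_energy(mass_kg: float, velocity_ms: float) -> float:
    return 0.5 * mass_kg * velocity_ms ** 2

low = kinetic_energy(0.010, 400)    # 10 g bullet at 400 m/s -> 800 J
high = kinetic_energy(0.010, 800)   # same bullet at 800 m/s -> 3200 J

# Energy transferred to tissue: entry energy minus exit energy.
transferred = kinetic_energy(0.010, 400) - kinetic_energy(0.010, 150)
print(low, high, transferred)  # 800.0 3200.0 687.5
```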

High-velocity projectiles create large temporary cavities that fill with water vapour, causing tissue damage and wound contamination distal to the primary tract of the bullet. When a bullet collides with a dense object, such as bone, secondary missiles may be generated, the number of which increases with the velocity of the bullet.9 These secondary missiles have less predictable trajectories and often do more soft tissue damage than the primary projectile. The velocity of the bullet is thus a primary determinant of tissue damage.

The foot has a number of anatomical and biomechanical features that make it unique in terms of GSW injury and management. The function of the foot depends on its ability to painlessly and efficiently transfer the energy generated by the leg muscles into locomotion. Unlike low-velocity GSWs to long bones and other joints, low-velocity GSWs to the foot often result in significant morbidity and are managed in the same manner as high-velocity injuries. The ratio of bone to soft tissue in the foot is high, with a particularly large number of articular surfaces. More than 80% of GSWs to the foot result in osseous injury,10 and such fractures frequently generate secondary projectiles that damage the densely packed neurovascular structures. The resulting inflammation and haemorrhage within the restricted fascial compartments of the foot predispose to compartment syndrome and other complications.

Management includes antibiotic therapy, operative debridement, bone stabilisation, revascularisation and soft tissue coverage. Low-velocity GSWs to the foot have traditionally been treated with intravenous antibiotic therapy for 1–5 days,11-14 followed by operative assessment of soft tissue contamination and irrigation. However, it is now generally accepted that both low- and high-velocity injuries require careful debridement of non-viable soft tissue and non-essential osseous fragments to prevent necrosis and wound infections.13-20 As mentioned earlier, the vast majority of GSWs to the foot involve intra-articular osseous injury. Even low-velocity injuries may require both internal and external percutaneous fixation of fractures to achieve adequate alignment.21 High-velocity injuries are often allowed to heal by secondary intention, while others may require wound closure with myocutaneous flaps, skin grafting or, in extreme cases, amputation.

Until now there has been no large-scale epidemiological examination of the injury characteristics and circumstances of GSWs to the foot. Given the anatomical and biomechanical features of the foot, these GSWs are unique in their presentation, and, while they have been studied on a case-by-case basis,22-24 the overarching trends of self-inflicted GSWs to the foot have not been investigated in a large sample. We therefore undertook a large-scale epidemiological examination of Americans who had shot themselves in the foot.

Methods

Study sample

Using a stratified probability sample of all US hospitals with more than six beds that provide 24-hour accident and emergency services, the National Electronic Injury Surveillance System (NEISS) collected data for the period 1993–2010 as part of the Firearm Injury Surveillance Study. Hospitals were stratified as very large, large, medium or small according to the number of emergency department visits per year, with an additional stratum for children’s hospitals. Between 1993 and 1996, 91 emergency departments were included in the sample; ten hospitals were added between 1997 and 1999, and two dropped out between 2000 and 2002, leaving 99 hospitals in the sampling frame from 2002 to 2010.

Data collection

NEISS, the primary data collection body for the Consumer Product Safety Commission, was responsible for data collection. Data on initial emergency department visits that resulted from non-fatal firearm-related injuries were extracted from the patients’ medical records.

Outcomes

The characteristics of the patients and the conditions in which each sustained self-inflicted GSWs to the foot were the primary outcomes.

Statistical analysis

All statistical analyses were conducted in Stata 12 (StataCorp). Participants were classified as either generic firearm victims or patients with a self-inflicted GSW to the foot. χ2 tests were used to compare categorical variables between the groups: sex, age group, marital status, illicit drug use, involvement in criminal activities, weapon used, location of incident, and diagnosis. Logistic regression was undertaken for sex and marital status (married v not married).

Results

Of the 69 111 reported firearm-related injuries, 667 (1.0%) were self-inflicted GSWs to the foot. Individuals who shot themselves in the foot were typically men (597, 89.6%) aged 15–34 years (345, 51.7%). Incidents generally occurred in the home (381, 57.1%) and involved a handgun (208, 31.2%) or BB gun (228, 34.2%) while the individual was neither committing a crime nor under the influence of alcohol. Significant differences between individuals who shot themselves in the foot and those with other firearm-related injuries were noted with respect to sex (χ2 = 3.19, P = 0.048), age group (χ2 = 116.39, P < 0.0001), marital status (χ2 = 87.18, P < 0.0001), illicit drug use (χ2 = 24.49, P < 0.0001), involvement in criminal activities (χ2 = 330.79, P < 0.0001), weapon used (χ2 = 457.56, P < 0.0001), location of the incident (χ2 = 571.16, P < 0.0001) and the physician’s diagnosis (χ2 = 273.18, P < 0.0001) (Box 1).

Logistic regression indicated that individuals who shot themselves in the foot were significantly more likely than individuals with other firearm-related injuries to be male (odds ratio [OR], 1.28; 95% CI, 1.0–1.7) and married (OR, 2.6; 95% CI, 2.1–3.4).
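As a rough check (a simple unadjusted sketch using the raw Box 1 counts with “not stated” responses excluded, not the regression model actually fitted):

```python
# Unadjusted odds ratio for being married among those who shot
# themselves in the foot vs other firearm injuries, from Box 1 counts.
foot_married = 103
foot_unmarried = 154 + 21 + 8           # never married + divorced/separated + other
other_married = 6_117
other_unmarried = 26_826 + 1_262 + 635

odds_ratio = (foot_married / foot_unmarried) / (other_married / other_unmarried)
print(round(odds_ratio, 1))  # 2.6, in line with the reported OR
```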

Incidents of shooting oneself in the foot were most common in October, November and December (Box 2).

Discussion

Contrary to popular belief, incidents of Americans shooting themselves in the foot are relatively rare; the characteristics of these incidents, however, are unique. When these auto-foot shooters were compared with individuals who had sustained other firearm-related injuries, significant differences were noted in the demographic characteristics of the victim/assailant, weapon of choice, the circumstances of the incident, and the nature of the injury itself.

There are several limitations that must be acknowledged when interpreting these data. The study included only individuals who presented to US emergency departments with non-fatal firearm-related injuries, so our comparisons cannot be generalised to the broader population. The primary source of most data was the individual who had shot themselves in the foot; while data about the injury were provided by health care professionals, information about the incident itself may be subject to self-report biases. In particular, a social desirability bias may have caused under-reporting of self-inflicted wounds, as individuals who shoot themselves in the foot may not be entirely forthcoming about the nature and cause of their injuries.

Never-married men aged between 15 and 34 years were the most common perpetrators of self-inflicted GSWs to the foot. Given the disproportionate number of men who possess firearms, a higher prevalence of these injuries among men is to be expected. Of interest, however, was the strength of the association between being married and shooting oneself in the foot, compared with the odds for non-married individuals and for other firearm-related injuries. These results are consistent with anecdotal reports from disgruntled spouses and with depictions of married men in the mainstream media, such as sitcoms and reality television programs. However, because of the nature of our data, evidence-based generalisations to the broader American population cannot conclusively be made.

Shooting oneself in the foot was extremely uncommon during the commission of a crime or while under the influence of drugs; only five individuals shot themselves in the foot while committing a crime. Given that drug use reduces inhibitions and diminishes cognitive capacity, it is somewhat counterintuitive that the association between illicit drug use and shooting one’s own foot was not stronger. The relationship between alcohol use and self-inflicted podiatric injuries is an area for future research, given the ease of access to alcohol and the prevalence of alcohol use in other firearm-related injuries.

Self-inflicted GSWs to the foot showed a marked seasonal pattern, with a disproportionate number of incidents occurring in October, November and December. Notably, this pattern was much stronger than for other firearm-related incidents, which occurred at a relatively constant rate throughout the year.

The epidemiology of firearm-related podiatric trauma has until now been neglected; to our knowledge, ours is the first large-scale epidemiological investigation of GSWs to the foot, self-inflicted or otherwise. Given the anatomical and biomechanical features of the foot, the nature of the wounds caused by GSWs is unique. Further epidemiological studies are required to examine overarching trends in the circumstances and scenarios in which these events occur. Although it may not be possible to prevent Americans from shooting themselves in the foot, large-scale investigations of the nature of these incidents provide invaluable information for those at greatest risk. Particular caution must be taken during the festive season if one is to avoid being caught under the missing toe.

Box 1 –
Characteristics of self-inflicted gunshot wounds to the foot

| | Self-inflicted GSWs to the foot, n (%) | Other firearm-related injuries, n (%) | P* |
|---|---|---|---|
| Number† | 667 (1.0%) | 68 444 (99.0%) | |
| Demographics | | | |
| Sex (male) | 597 (89.6%) | 59 562 (87.0%) | 0.048 |
| Age | | | |
| 0–14 years | 158 (23.7%) | 7 691 (11.2%) | < 0.0001 |
| 15–34 years | 345 (51.7%) | 45 286 (66.2%) | |
| 35–54 years | 134 (20.1%) | 12 109 (17.7%) | |
| ≥ 55 years | 30 (4.5%) | 2 843 (4.2%) | |
| Marital status | | | |
| Married | 103 (24.0%) | 6 117 (11.6%) | < 0.0001 |
| Never married | 154 (35.9%) | 26 826 (51.0%) | |
| Divorced or separated | 21 (4.9%) | 1 262 (2.4%) | |
| Other | 8 (1.6%) | 635 (1.2%) | |
| Not stated | 143 (33.3%) | 17 760 (33.8%) | |
| The incident | | | |
| Drugs involved | 11 (1.7%) | 1 644 (2.4%) | < 0.0001 |
| Crime involved | 5 (1.0%) | 11 680 (17.1%) | < 0.0001 |
| Weapon | | | |
| Handgun | 208 (31.2%) | 19 002 (27.8%) | < 0.0001 |
| Rifle | 80 (12.0%) | 3 169 (4.6%) | |
| Shotgun | 60 (9.0%) | 2 697 (3.9%) | |
| BB gun | 228 (34.2%) | 10 094 (14.8%) | |
| Unknown | 91 (13.6%) | 33 482 (48.9%) | |
| Location | | | |
| Home | 381 (57.1%) | 14 661 (21.4%) | < 0.0001 |
| Farm | 5 (0.8%) | 109 (0.2%) | |
| Apartment or condominium | 1 (0.2%) | 75 (0.1%) | |
| Street or highway | 12 (1.8%) | 14 049 (20.5%) | |
| Other public area | 24 (3.6%) | 7 500 (11.0%) | |
| Mobile home | 1 (0.2%) | 112 (0.2%) | |
| School | 1 (0.2%) | 276 (0.4%) | |
| Recreational area | 18 (2.7%) | 1 140 (1.7%) | |
| Unknown | 224 (33.6%) | 30 473 (44.5%) | |
| The injury: diagnosis | | | |
| Amputation | 2 (0.3%) | 128 (0.2%) | < 0.0001 |
| Contusion or abrasion | 7 (1.1%) | 3 936 (5.8%) | |
| Foreign body | 164 (24.6%) | 7 289 (10.7%) | |
| Fracture | 75 (11.2%) | 3 738 (5.5%) | |
| Laceration | 25 (3.8%) | 7 818 (11.4%) | |
| Puncture | 265 (39.7%) | 24 317 (35.5%) | |
| Avulsion | 1 (0.2%) | 99 (0.1%) | |
| Other | 128 (19.2%) | 16 680 (24.4%) | |

*Self-inflicted wounds v other firearm-related wounds. †Number of injuries and percentage of all firearm-related injuries. All other percentages in the table are column percentages.

Box 2 –
Gunshot wounds in the United States, by month; expressed as a percentage of all foot- or non-foot-related incidents

Oversleeping linked to increased mortality

It’s not just smoking and high alcohol consumption that we should advise our patients to avoid if they want to live a long life.

A Sydney University study has found that regularly sleeping longer than nine hours a night can also increase the risk of mortality.

The study, published in PLOS ONE, found that, on its own, regular oversleeping meant a 44% increase in the risk of death over the six-year study period.

It also found that sitting in a chair for more than seven hours in a 24 hour period can be a big no-no for health.

The researchers gave a lifestyle questionnaire to 231,048 Australians aged 45 years or older who were participating in the Sax Institute’s 45 and Up Study. Participants were asked to score six health behaviours.

The six deadly behaviours are:

  • Alcohol consumption
  • Poor diet
  • Inactivity
  • Smoking
  • Spending more than seven hours a day sitting down
  • Sleeping for more than nine hours

Over 90% of the participants had one of the 30 most commonly occurring risk factors and combinations, including physical inactivity, sedentary behaviour and/or long sleep duration. Combinations involving smoking and high alcohol consumption were the most strongly associated with all-cause mortality.

Dr Melody Ding, one of the study authors, told ABC Radio: “The most intriguing was the 44% risk increase for those who are sleeping more than nine hours a night. When you combine too much sleep with physical inactivity… then you find the risk for death has increased 149%.

“People who are sleeping too much, sitting a lot and also not being physically active then you’re looking at a combined risk increase of four times.”


Another author, Associate Professor Emmanuel Stamatakis told Fairfax Media: “One of the possible explanations is ‘reverse causality’. Long sleeping times could be indicative of an underlying, undiagnosed disease.”

However, he also said the way the survey was worded could be a possible explanation: “In the survey, people were asked ‘How long did you sleep?’ This most likely elicits an answer to the question: ‘How long were you in bed?’

“This says nothing about the quality of the sleep,” Dr Stamatakis said. “So, reported long sleep duration could in fact be indicative of fragmented, restless and poor-quality sleep.”


The results found that a person with all six bad habits is more than five times as likely to die during a six-year period as one who is very clean-living.

Interestingly, high alcohol consumption on its own was the least risky behaviour, associated with just an 8% increase in mortality.

Dr Stamatakis said this shouldn’t give people “licence to drink”.

“General population studies show exactly the opposite result. These show that harmful effects from alcohol start from moderate consumption levels,” he said.
