
Direct-to-consumer genetic testing — where should we focus the policy debate?

What are the implications for health systems, children and informed public debate?

Until recently, human genetic tests were usually performed in clinical genetics centres. In this context, tests are provided under specific protocols that often include medical supervision, counselling and quality assurance schemes that assess the value of the genetic testing services. Direct-to-consumer (DTC) genetic testing companies operate outside such schemes, as noted by Trent in this issue of the Journal.1 While the uptake of DTC genetic testing has been relatively modest, the number of DTC genetic testing services continues to grow.2 Although the market continues to evolve,3 it seems likely that the DTC genetic testing industry is here to stay.

This reality has led to calls for regulation, with some jurisdictions going so far as to ban public access to genetic tests outside the clinical setting.4,5 In Australia, as Nicol and Hagger observe, the regulatory situation is still ambiguous;6 regulation is further complicated by the activity of internet-accessible companies that lie outside Australia’s jurisdiction. In general, the numerous policy documents that have emanated from governments and scientific and professional organisations cast DTC services in a negative light, seeing more harms than benefits, and, in some jurisdictions, governments have tried to regulate their services and products accordingly.7,8 Policy debates have focused on the possibility that DTC tests could lead to anxiety and inappropriate health decisions due to misinterpretation of the results. But are these concerns justified? Might they be driven by the hype that has surrounded the field of genetics in general? If so, what policy measures are actually needed and appropriate?

Time for a hype-free assessment of the issues?

Driven in part by the scientific excitement associated with the Human Genome Project, high expectations and a degree of popular culture hype have attracted both public research funds and venture capital to support the development of disease risk-prediction tests.3 This hype — which, to be fair, is created by a range of complex social and commercial forces9 — likely contributed to both the initial interest in the clinical potential of genetic testing and the initial concerns about possible harms. Both are tied to the perceived — and largely exaggerated — predictive power of genetic risk information, especially in the context of common diseases. There are numerous ironies to this state of affairs, including the fact that the call for tight regulation of genetic testing services may have been the result, at least in part, of the hype created by both the research community and the private sector around the utility of genetic technologies.9 This enthusiasm helped to create a perception that genetic information is unique, powerful and highly sensitive, and specifically that, as a result, the genetic testing market warrants careful oversight.

Now that research on both the impact and utility of genetic information is starting to emerge, a more dispassionate assessment can be made about risks and the need for regulation. Are the concerns commonly found in policy reports justified? Where should we direct our policymaking energy?

It may be true that consumers of genetic information — and, for that matter, physicians — have difficulty understanding probabilistic risk information. However, the currently available evidence does not show that the information received from DTC companies causes significant individual harm, such as increased anxiety or worry.10,11 In addition, there is little empirical support for the idea that genetic susceptibility information results in unhealthy behavioural changes (eg, the adoption of a fatalistic attitude).5

The concerns about consumer anxiety and unhealthy behaviour change have driven much of the policy discussion surrounding DTC testing. As such, the research could be interpreted as suggesting that there is no need for regulation or further ethical analysis. This is not the case. We suggest that the emerging research invites us to focus our policy attention on issues that reach beyond the potential harms to the individual adult consumer — where, one could argue, there seems to be little empirical evidence to support the idea that the individual choice to use DTC testing should be curtailed — to consideration of the implications of DTC testing for health systems, children and informed public debate.

Health system costs

Although genetic testing is often promoted as a way of making health care more efficient and effective by enabling personalised medical treatment, it has been suggested that the growth in genetic testing will increase health system costs. A recent survey of 1254 United States physicians reported that 56% believed new genetic tests will increase overall health care spending.12

Will DTC testing exacerbate these health system issues by increasing costs and, perhaps, the incidence of iatrogenic injuries due to unnecessary follow-up? This seems a reasonable concern given that studies have consistently shown that DTC consumers view the provided data as health information that should be brought to a physician for interpretation. One study, for example, found that 87% of the general public would seek more information about test results from their doctor.13 The degree to which these stated intentions translate into actual physician visits is unclear. But for health systems striving to contain costs, even a small increase in use is a potential health policy issue, particularly given the questionable clinical utility of most tests offered by DTC companies. It seems likely that there will be an increase in costs with limited offsetting health benefits — although more research is needed on both these possible outcomes.

Compounding the health system concerns is the fact that few primary care physicians are equipped to respond to inquiries about DTC tests. A recent US study found that only 38% of the surveyed physicians were aware of DTC testing and even fewer (15%) felt prepared to answer questions.14 As Trent notes, even specialists can encounter difficulties in interpreting DTC genetic tests.1 This raises interesting questions about how primary care physicians will react to DTC test results. Will they, for example, order unnecessary follow-up tests or referrals, thus amplifying the concerns about the impact of DTC testing on costs?

Testing of children

While there is currently little evidence of harm caused by DTC genetic testing, most of the research has been done in the context of the adult population. The issues associated with the testing of minors are more complicated, involving children’s individual autonomy and their right to control information about themselves. Many DTC genetic testing companies include tests for adult-onset diseases or carrier status. Testing children for such traits contravenes professional guidelines. Nevertheless, research indicates that only a few DTC companies have addressed this concern. A study of 29 DTC companies found that 13 did not have policies on the issue and eight allowed testing if requested by a parent.15 While it is hard to prevent parents from submitting samples from minors to genetic testing companies, this calls for an important policy debate on whether there are limits on parental rights to access the genetic information of their children. Current paediatric genetic guidelines recommend delaying testing in minors unless it is in their best interests, but these are not enforceable and not actively monitored.16

In addition, unique policy challenges remain with regard to the submission of DNA samples in a DTC setting. It is difficult for DTC companies to verify that a sample received is from the person claiming to be the sample donor. Policymakers should consider strategies, such as sanctions, to prevent tests being ordered without the consent of the person being tested.

Truth in advertising

The DTC industry is largely based on reaching consumers via the internet. Research has shown that company websites — which, in many ways, represent the face of the industry — contain a range of untrue or exaggerated claims of value.17 Advertisements for tests that have no or limited clinical value carry a higher risk of misleading consumers, because the claims needed to promote these services are likely to be exaggerated. It is no surprise that stopping the dissemination of false or misleading statements about the predictive power of genetics has emerged as one of the most widely agreed policy priorities.8 While evidence of actual harm caused by this trend is far from robust, it is hard to argue against the development of policies that encourage truth in advertising and promote more informed consumers. Moreover, the claims found on these websites may add to the general misinformation about the value and risks of genetic information that now permeates popular culture. Taking steps to correct this phenomenon is likely to help public debate and policy deliberations. For example, this might include a coordinated, international push by national consumer protection agencies to ensure that, at a minimum, the information provided by DTC companies is accurate.18

Conclusion

These are not the only social and ethical issues associated with DTC genetic testing. Others, like the use of DTC data for research and the implications of cheap whole genome sequencing, also need to be considered. But they stand as examples of issues worthy of immediate policy attention, regardless of what the evidence says about a lack of harm to individual adult users. We must seek policies that, on the one hand, allow legitimate commercial development in genomics and, on the other, achieve appropriate and evidence-based consumer protection. In finding this balance, we should not be distracted by hype or unsupported assertions of either harm or benefit.

Damien John Jolley MSc(Stat), MSc(Epidemiol), BSc(Hons), DipEd, AStat

Damien Jolley was born in Melbourne on 23 March 1954. He graduated from the University of Melbourne in 1976 with a combined Bachelor of Science (Honours) and Diploma of Education. After a short period teaching mathematics, he moved to Papua New Guinea with Australian Volunteers International and began his epidemiological studies. Five years later, he returned to Melbourne and completed a Master of Science in statistics while working as a statistician at the Anti-Cancer Council of Victoria.

Damien was one of the first recipients of a VicHealth Public Health Research Fellowship in 1989. He was then appointed Research Fellow at the London School of Hygiene and Tropical Medicine, where he gained a Master of Science in epidemiology with distinction in 1991.

He moved into academia in 1995, initially as a Principal Research Fellow at the University of Melbourne in its Department of Public Health, to pursue his passion for medical research, statistical analysis and lecturing. In 1999, he joined the Faculty of Health and Behavioural Science at Deakin University, where he was appointed Associate Professor then Associate Dean (Teaching and Learning).

In 2005, Damien moved to the Monash Institute of Health Services Research, where he was appointed Interim Director in 2007. His final position, from 2007, was Associate Professor in Biostatistics at the School of Public Health and Preventive Medicine at Monash University.

Throughout his career, Damien worked as an independent consultant to various national and international scientific and health organisations. He was committed to research, and his obsession for good methodology brought rigour to many studies. This was demonstrated by the wide variety of projects that sought his help, his numerous conference presentations and high-impact publications, and the many grants and awards that he received. These included the 2004 MJA/Wyeth Award for the best clinical research paper published in the MJA, the Volvo Award for low back pain research in 2001, and the prestigious Sir Richard Stawell Memorial Prize from the Australian Medical Association (Victoria) in 2012. His passion for teaching brought a common-sense element to the field of statistics; and the number of awards to his students is a tribute to the quality of his supervisory and mentoring capability.

Damien will be remembered as one of Australia’s best known and most eminent biostatisticians and for his scrupulous intellect and sense of humour, which he maintained throughout his courageous battle with metastatic melanoma. Damien passed away on 15 February 2013 and is survived by his wife Jenny and their children Max and Alex.

Increasing incidence of hospitalisation for sport-related concussion in Victoria, Australia

Globally, traumatic brain injury (TBI) is the leading cause of death and disability in children and adults and is involved in nearly half of all trauma deaths.1 In Australia, Europe and the United States, the estimated annual incidence of TBI requiring hospitalisation is 60–250 per 100 000 population, with 80%–90% of cases categorised as mild TBI.2 For some young adults in the US, the annual incidence of emergency department presentations for TBI is reportedly as high as 760 per 100 000 population.3 In Australia, limited population data are available, but one report estimated the direct hospital costs for all TBI in the 2004–05 financial year at $184 million.4

A subset of mild TBI is concussion, reflecting a complex pathophysiological process resulting from trauma to the brain. Common symptoms include headache, amnesia, confusion, blurred vision, dizziness, nausea, balance problems and fatigue. Loss of consciousness is reported in 10%–20% of cases. Most concussions resolve within a few days to weeks, but in some cases the symptoms can be prolonged.5–7 The estimated incidence of concussion varies between countries, but US census data suggest about 600 cases per 100 000 population.3 In 2004–05, of the 22 710 Australia-wide hospital separations coded as TBI using the International statistical classification of diseases and related health problems, 10th revision, Australian modification (ICD-10-AM),8 16.9% were concussions with no loss of consciousness and 64.2% were other TBI with some loss of consciousness.4

Injuries during sport account for about a fifth of all TBI cases,2 but as sport is included under several injury mechanism categories in routinely collected data, the precise incidence is unknown. This is particularly so in Australia, where there is no routine monitoring or reporting of sport-related concussion. Data using ICD-9 coding suggest there are 1.6–3.8 million hospital presentations for sports and recreation-related head injury in the US annually.2 As sport-related concussions are not usually reported to doctors, this could underestimate the true incidence by a factor of six to 10.9

As the incidence of sport-related concussion in Australia, especially at the population level, is unknown, we aimed to describe trends in the incidence of hospitalisation for sport-related concussion in Victoria over a 9-year period and to estimate the associated total hospital costs. A secondary aim was to identify the specific sports with the highest frequency of hospitalisation for concussion and to examine trends in these sports over the same period. It is hoped that this information will be used to identify priority groups for population targeting of concussion prevention and management strategies.

Methods

The Victorian Injury Surveillance Unit is the repository for de-identified injury surveillance data in Victoria, a function outsourced to it by the Victorian Department of Health. Hospitalisation data relating to all admissions to public and private hospitals in Victoria for the 9 financial years from 2002–03 to 2010–11 were extracted from the Victorian Admitted Episodes Dataset (VAED). Data in the VAED are coded to the ICD-10-AM.8 We chose July 2002 as the start date to coincide with the introduction of the third edition of the ICD-10-AM, which introduced more than 200 codes for sporting activities.

From the VAED data, we selected patients aged ≥ 15 years who had all of the following, as indicated by ICD-10-AM codes:

  • a principal diagnosis recorded as an injury (S00–T98)

  • a concussive injury recorded anywhere in the diagnosis codes (S06.00–S06.05)

  • an unintentional external cause code (V00–X59)

  • a sport or exercise activity code at the time of the incident (U50–U71).

We restricted our analysis to people aged ≥ 15 years because appropriate published sports participation data were only available for this group. To minimise multiple counting of patients, transfers within and between hospitals and identifiable re-admissions to the same hospital within 30 days were excluded.
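For illustration, a minimal sketch of these selection rules applied to a single hypothetical record is shown below. It is not the actual extraction code; the field names (age, principal_diagnosis, diagnosis_codes, external_cause, activity_code) are invented stand-ins for VAED variables, and the exclusion of transfers and 30-day re-admissions described above is not shown.

```python
# A minimal sketch (not the actual extraction code) of the ICD-10-AM
# case-selection rules described above. Field names are hypothetical
# stand-ins for VAED variables.

CONCUSSION_CODES = {f"S06.0{i}" for i in range(6)}  # S06.00-S06.05

def in_block(code: str, lo: str, hi: str) -> bool:
    """True if the three-character ICD-10-AM category falls within [lo, hi]."""
    return lo <= code[:3] <= hi

def is_sport_related_concussion(record: dict) -> bool:
    return (
        record["age"] >= 15
        and in_block(record["principal_diagnosis"], "S00", "T98")  # injury
        and any(c in CONCUSSION_CODES for c in record["diagnosis_codes"])
        and in_block(record["external_cause"], "V00", "X59")       # unintentional
        and in_block(record["activity_code"], "U50", "U71")        # sport/exercise
    )

# Hypothetical record: a 24-year-old concussed in a fall while skating.
record = {"age": 24, "principal_diagnosis": "S06.00",
          "diagnosis_codes": ["S06.00"], "external_cause": "W02",
          "activity_code": "U61"}
print(is_sport_related_concussion(record))  # True
```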

Ethics approval for the study was obtained from the Human Research Ethics Committee at the Victorian Department of Health.

Hospitalisation costs

Each hospitalisation is assigned an Australian Refined Diagnosis Related Group (AR-DRG), which provides a clinically meaningful way of relating the types of hospital patients to the resources required by the hospital to treat them. The National Hospital Cost Data Collection produces average public hospital costs for each AR-DRG, by state. We assigned the Victorian average costs to each hospitalisation for concussion according to its AR-DRG.
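As a sketch of this costing step (illustrative only; the file names and column layout are hypothetical), each admission can be joined to the Victorian average cost for its AR-DRG:

```python
# A minimal sketch of assigning Victorian average public hospital costs to
# admissions by AR-DRG. File names and columns are hypothetical.
import pandas as pd

admissions = pd.read_csv("concussion_admissions.csv")  # one row per admission, with "ar_drg"
avg_costs = pd.read_csv("vic_avg_costs_by_ardrg.csv")  # columns: "ar_drg", "avg_cost"

costed = admissions.merge(avg_costs, on="ar_drg", how="left")
total_cost = costed["avg_cost"].sum()  # estimated total hospital cost
print(f"${total_cost:,.0f}")
```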

Concussion rates and participation-adjusted trends

We obtained sports participation numbers from the annual Exercise, Recreation and Sport Survey (ERASS) for each year from 2002 to 2010.10 The ERASS is a joint initiative of the Australian Sports Commission and state and territory departments of sport and recreation. It collects information on the frequency, duration, nature and type of activities that people aged ≥ 15 years participated in for exercise, recreation or sport during the previous 12 months. We calculated the annual number and rate of hospitalisations for sport-related concussion both overall and for individual sports, using the participation data as the denominator.

Trends in the annual rate of hospitalisation for sport-related concussion per 100 000 participants aged ≥ 15 years were computed using a log-linear regression model of the rate data, assuming a negative binomial distribution of cases. Statistics relating to the trend curves (ie, the slope and intercept) and their associated P values were calculated using the regression model in SAS version 9.1 (SAS Institute). A trend was considered statistically significant if the P value for the regression slope was < 0.05.
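For readers who want the mechanics, a minimal sketch of this trend model is given below in Python (statsmodels) rather than SAS. All counts and denominators are hypothetical, and the negative binomial dispersion parameter is left at the library default rather than estimated from the data.

```python
# A minimal sketch of a log-linear negative binomial trend model for yearly
# concussion counts, with log(participants) as an offset so that the slope
# describes the change in the participation-adjusted rate. Data hypothetical.
import numpy as np
import statsmodels.api as sm

years = np.arange(9)  # 2002-03 (coded 0) to 2010-11 (coded 8)
cases = np.array([440, 455, 470, 490, 515, 540, 565, 595, 620])
participants = np.full(9, 2_500_000.0)  # ERASS-style denominators (hypothetical)

X = sm.add_constant(years)
fit = sm.GLM(cases, X,
             family=sm.families.NegativeBinomial(),  # dispersion at default
             offset=np.log(participants)).fit()

slope, p_trend = fit.params[1], fit.pvalues[1]
print(f"average annual change in rate: {100 * (np.exp(slope) - 1):.1f}%")
print(f"P for trend: {p_trend:.3g}")
```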

Results

Of the 28 718 hospitalisations for concussion in people aged ≥ 15 years in Victoria over the 9-year period, 4745 (16.5%) were for sport-related concussion. The estimated total hospital cost of hospitalisations for sport-related concussion over the 9 years was $17 944 799 ($1 993 867 per year), with a median cost per admission of $1583 (range, $631 to $190 190). Although the average hospital costs used to calculate these figures are specific to the public sector, only 94 hospitalisations (2%) were in private hospitals.

The annual number of hospitalisations for sport-related concussion increased significantly by 60.5% (95% CI, 41.7%–77.3%; P < 0.001), from 443 in 2002–03 to 621 in 2010–11 (Box 1), corresponding to an average annual increase of 5.4% (95% CI, 4.0%–6.6%; P < 0.001). Over the same period, the rate of hospitalisation for concussion injury per 100 000 participants also increased significantly, by 38.9% (95% CI, 17.5%–61.7%; P < 0.001) (Box 1), an average annual increase of 3.7% (95% CI, 1.8%–5.5%; P < 0.001). Thus, just over a third of the increase in hospitalisations (the gap between the 60.5% increase in the number of hospitalisations and the 38.9% increase in the participation-adjusted rate) can be explained by increases in reported sports participation.

The most common activities leading to hospitalisation (defined as accounting for > 50 hospitalisations over the 9-year period) were: team ball sports (particularly the football codes), modes of active transportation, and snow-based adventure sports (Box 2). Together, the football codes accounted for 36.0% (1709/4745) of all hospitalisations. The activities with the highest mean participation-adjusted rates of hospitalisation for concussion over the 9-year period were motor sports (181.8 per 100 000 participants), equestrian activities (130.3/100 000) and Australian football (80.3/100 000) (Box 2).

Cricket was the only activity with a decrease in the rate of hospitalisation for sport-related concussion, but it was not statistically significant. The rate of hospitalisation for concussion injury increased in all other major sports and recreational activity groups, although the increases were only significant for roller sports, rugby, cycling and soccer.

Discussion

Our findings show that the number of patients with ICD-10-AM-coded sport-related concussion admitted to hospital in Victoria increased by 61% over the period 2002–03 to 2010–11. Much of this increase was independent of changes in sports participation, because the participation-adjusted hospitalisation rate also increased significantly by 39% over this period.

Head injuries sustained during participation in sport place a significant burden on the health care systems needed to assess and treat them; the sport delivery systems responsible for providing safe sporting opportunities; and the individuals who sustain them. The short- and long-term consequences of concussion mean that potentially years of productive life are lost, and there are substantial economic costs for individuals, families and society.

In the US, substantial economic and societal effects of TBI have been reported, with an estimated 1.7 million people sustaining a TBI annually, associated with 1.4 million emergency room visits and 275 000 hospitalisations.11 In the sporting context, the number of concussions reported to the National Collegiate Athletic Association increased significantly by 7% annually from the 1988–89 to 2003–04 US academic years.12 These data reflect a wide range of collegiate sports participation, and the increasing trends could reflect increasing concussion awareness, changes in player risk, or changes in data collection methodology.13,14 Increasing injury incidence seems not to be confined to Victoria and may reflect a longer-term increasing incidence of injury globally.

The reasons for the increasing trends of hospitalisation for concussion in our study are unknown. They could reflect, at least partially, changing health delivery system factors such as improved detection of concussion and its diagnosis in the hospital setting. Some of the increase could be related to greater awareness of concussion leading to more people presenting to hospitals for treatment. However, as the data here relate only to patients who were hospitalised, this presentation bias is expected to be minimal over time. It is not known if patients with concussion treated in emergency departments (but not admitted), by general practitioners or by sports medicine practitioners follow the same trends.

Sports delivery factors could also contribute to the increasing trends. For example, increased competitiveness may lead to faster game speeds and higher-intensity activity, and hence increased forces of collision that lead to concussive impacts. As athletes get bigger, stronger and faster, and there are attempts to increase game speeds for spectator enjoyment, it is logical that the forces associated with collisions would also increase in magnitude. There is currently no effective headgear that prevents concussions, so more collisions with more force would be expected to increase the number of concussions. Other ways to prevent concussion are rule changes and game modifications, better education of coaches and players, and adherence to return-to-play guidelines after a previous head injury.

Although the football codes are often thought to be the source of most sport-related concussion,15 case-series reports of ICD-10-AM-coded hospitalisations for injury have found that concussion is also highly associated with equestrian activities, soccer, cricket, netball and martial arts.4,16 Nonetheless, based on a comparison of results from epidemiological studies of team sports with variable definitions of concussion, Australia’s high-participation football codes may be associated with a concussion risk 10–15 times higher than that for American football.17–19 This difference likely reflects differences in injury mechanisms, speeds at which the games are played, and the personal protective equipment used. In our study, football accounted for just over a third of all concussion cases. Over the 9-year period, the participation-adjusted rates of hospitalisation increased significantly by 77% in soccer and by 96% in rugby, but non-significantly by 8% in Australian football, perhaps reflecting greater attention to concussion prevention in the latter than in other football codes. These increases support priority attention being given to preventing concussion across all football codes.

Our data also show that sport-related concussion is not just a problem for football. In fact, the highest participation-adjusted rates of hospitalisation were for motor sports and equestrian activities, where higher-impact speeds and greater fall distances are more likely to lead to concussion. The greatest increase over the 9-year period was in roller sports; the reasons for this are unclear. These findings confirm the US experience of concussion occurring in many sports,14,20 and emphasise that a focus on football codes alone would not prevent most concussions.

Our study has some limitations. It assumes that the diagnosis of concussion has been made correctly and that all cases have been identified by the ICD-10-AM coding. At least one study has shown misdiagnosis of head injury in emergency departments,21 and if this also occurs for hospital admissions, then the rates in our study would under-enumerate the true rate of hospitalisation for sport-related concussion. In any case, the number of sport-related concussions is almost certainly underestimated because the only way to identify a sport in the ICD-10-AM is through the activity codes, which are known to be underused.22 Also, the sports participation data do not take into account the frequency of participation, and it may be that people who play more sport at higher intensities or more regularly are the people most at risk of concussion.

While only relating directly to one state, our data clearly demonstrate that sport-related concussion is a significant and increasing public health burden. Our findings are an underestimate of the true burden of sport-related concussion because: a) they are only based on patients requiring hospitalisation, and most patients with concussion would not require hospitalisation; b) the number of hospitalisations is almost certainly an underestimate because of limitations in the available data; c) they only include direct hospital costs, and the indirect morbidity costs of injury are known to be at least as much as these; and d) there is no assessment of any adverse long-term health outcomes. Even when injured players sustain loss of consciousness, most do not attend a hospital for treatment. If the underreporting of incident cases was up to 10-fold,9 the real number of sport-related concussions could be 47 450 over the 9-year period, potentially costing the Victorian community about $20 million annually.

The most recent international consensus statement on the management of concussion in sport strongly argued that there is an immediate need to develop guidelines, education resources and other health promotion approaches for preventing head injury and its adverse outcomes across all sports with a risk of serious head injury.7 Anecdotally, there are high levels of public concern about the risk of head injury in sport, and public misinformation about its assessment, management and prevention.23,24 Our findings that a large number of head injuries occur annually and that trends in participation-adjusted incidence rates are increasing make the prevention of head injury in sport a population health priority.

1 Trends in the frequency and rate of hospitalisation per 100 000 participants aged ≥ 15 years for sport-related concussion,* Victoria, 2002–03 to 2010–11


* Dotted lines show trends.

2 Frequency, mean rate per 100 000 participants, and trend of hospitalisation of people aged ≥ 15 years for sport-related concussion, Victoria, 2002–03 to 2010–11

Sports activity* | Frequency | Mean rate over 9-year period† | Estimated annual % change in rate‡ | Estimated % change in rate over 9-year period‡ | P for trend
Australian football§ | 1442 | 80.3 | 0.9% | 8.2% | 0.74
Cycling | 766 | 17.6 | 5.5% | 62.0% | 0.004
Motor sports | 674 | 181.8 | 0.5% | 4.6% | 0.86
Equestrian activities | 504 | 130.3 | 1.0% | 9.0% | 0.79
Soccer | 198 | 9.7 | 6.5% | 76.6% | 0.007
Ice and snow sports | 184 | 31.2 | 1.9% | 18.9% | 0.70
Basketball | 115 | 6.9 | 2.6% | 26.3% | 0.70
Roller sports¶ | 105 | 44.8 | 20.2% | 424.8% | < 0.001
Rugby (all codes)** | 69 | 49.9 | 10.1% | 95.8% | < 0.001
Netball | 55 | 3.6 | 8.0% | 100.5% | 0.31
Cricket | 51 | 3.0 | −4.2% | −31.8% | 0.53
All others | 582 | 3.8 | 6.7% | 80.0% | 0.003
Total | 4745 | 15.6 | 3.7% | 38.9% | < 0.001

* The specific sports activities listed are those with > 50 hospitalisations for concussion in total over the 9-year period. † Per 100 000 participants aged ≥ 15 years. ‡ For the trend analyses, yearly rates were computed using published participation denominator data for each year. § All “football unspecified” cases (n = 254) were included in the “Australian football” category on the presumption that patients playing soccer or rugby would specify they were playing these codes in Victoria. This may not be so for all players, and there may be some misclassification. ¶ Roller sports includes inline skating, rollerskating, skateboarding and non-motor scooter riding. ** Data for all rugby codes combined cover only 7 years (2004–05 to 2010–11 financial years) as no published participation data were available for the first 2 years (there were an additional 13 hospitalisations for rugby-related concussion during 2002–03 and 2003–04).

Uptake of influenza vaccine by pregnant women: a cross-sectional survey

Pregnant women with influenza have an increased risk of complications, including hospitalisation, intensive care unit admission, preterm delivery and, in severe cases, death.1–3

A growing body of evidence supports the safety and effectiveness of inactivated influenza vaccine during pregnancy. A recent review concluded that influenza vaccine is safe to administer during any trimester.4 Two recent randomised controlled trials found that babies born to vaccinated mothers had a reduced risk of contracting influenza in the first 6 months of life.5,6 The 9th edition of the Australian immunisation handbook recommends influenza vaccine for all pregnant women who will be in their second or third trimester during influenza season, although it can be given in any trimester.7 The vaccine is free for all pregnant women.

Uptake of influenza vaccine by pregnant women in Australia is low, with estimates ranging from about 7% to 40%.8–11 However, these estimates are often from relatively small samples at single sites dependent on local vaccination policies and procedures.

Our aims were to determine the uptake of seasonal influenza vaccine among a larger sample of pregnant women residing in New South Wales, and to identify barriers and facilitators to vaccine uptake in pregnancy.

Methods

Survey development

We used a self-administered questionnaire delivered to pregnant women attending public hospitals in NSW. The survey was based on the Health Belief Model and Precaution Adoption Process Model of health behaviour.12 Questions covered self-reported receipt of influenza vaccine during the current pregnancy, demographic characteristics, general attitudes toward vaccination, perception of disease risk and vaccine risk and benefit during pregnancy, and information sources. Face and content validity and internal consistency were examined through a pilot study. The final questionnaire was translated into Arabic and Chinese.

Sample size and recruitment

A non-random stratified sampling plan was used to ensure a representative sample of pregnant women in NSW. Pilot data showed 15% vaccine uptake, and a target sample of 783 was calculated to provide a 95% confidence interval within 15% of the point estimate. Data on women who had given birth in NSW between 2004 and 2008 were obtained (J Bentley, Principal Epidemiologist, Health Services, Centre for Epidemiology and Evidence, NSW Ministry of Health, personal communication, 2010) and stratified by age, parity and region of residence. Using these population data, target sample proportions were calculated for each stratum.
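For orientation, a conventional sample-size formula for estimating a proportion to a stated relative precision is sketched below. The study’s exact assumptions are not reported, so this generic calculation is illustrative only and does not necessarily reproduce the published target of 783.

```python
# A minimal sketch of a standard sample-size formula for estimating a
# proportion p so that the 95% CI half-width is within a stated fraction
# of p. Generic and unadjusted; study-specific corrections would alter n.
from math import ceil

p = 0.15    # anticipated vaccine uptake, from the pilot study
rel = 0.15  # target half-width: within 15% of the point estimate
z = 1.96    # standard normal quantile for 95% confidence

n = z**2 * p * (1 - p) / (rel * p) ** 2
print(ceil(n))  # unadjusted n under these assumptions
```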

Women were recruited from antenatal clinic waiting rooms of three tertiary hospitals and one Aboriginal community-controlled health service (ACCHS). The hospital sites were: a hospital in metropolitan Sydney (Site A), with about 5300 births per year; a hospital in Sydney’s outer suburbs (Site B), with 4200 births per year; and a rural referral hospital (Site C), with 800 births per year. The ACCHS was associated with Site C. During the study, Sites A and B did not provide influenza vaccination for pregnant women; however, vaccination had been offered at Site B in March–June 2011, before the study commenced. During recruitment, Site C ran an 8-week influenza vaccination clinic onsite.

Recruitment took place between 27 July and 9 November 2011. Recruitment days were rotated to ensure all days of clinic operation were sampled. All women attending on these days were approached.

Ethics approval was gained from the human research ethics committee of each participating institution, and the NSW Aboriginal Health and Medical Research Council.

Data analysis

We used χ2 tests for differences in proportions and backward stepwise logistic regression to identify factors independently associated with vaccine uptake. Data were analysed using SPSS version 17.0 (IBM) and QuickCalcs (GraphPad Software).
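As an illustration of this analytic approach, a minimal sketch of backward elimination for logistic regression follows. It is not the SPSS procedure used in the study; the variable names and simulated data are hypothetical.

```python
# A minimal sketch (not the study's SPSS procedure) of backward elimination
# for logistic regression: fit the full model, then repeatedly drop the
# predictor with the largest P value until all remaining P values are below
# alpha. Variable names and simulated data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def backward_logit(df, outcome, predictors, alpha=0.05):
    kept = list(predictors)
    while kept:
        X = sm.add_constant(df[kept])
        fit = sm.Logit(df[outcome], X).fit(disp=0)
        pvals = fit.pvalues.drop("const")
        worst = pvals.idxmax()
        if pvals[worst] < alpha:
            return fit          # all remaining predictors are significant
        kept.remove(worst)      # drop the weakest predictor and refit
    return None

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "received_recommendation": rng.integers(0, 2, n),
    "safety_concern": rng.integers(0, 2, n),
    "perceived_severity": rng.integers(0, 2, n),
})
# Simulated outcome driven mainly by provider recommendation:
logit = -1.5 + 2.5 * df["received_recommendation"] - 0.8 * df["safety_concern"]
df["vaccinated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

fit = backward_logit(df, "vaccinated",
                     ["received_recommendation", "safety_concern", "perceived_severity"])
if fit is not None:
    print(np.exp(fit.params))   # adjusted odds ratios
```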

Results

Participant characteristics

The overall response rate was 87% (815/939). Site-specific rates were: Site A, 88% (349/398); Site B, 79% (234/298); and Site C, 95% (232/243). The overall sample proportions for age and parity differed from the NSW population data, so the data were weighted for these variables. The weighted sample was comparable to women who gave birth in NSW between 2004 and 2008 for age, parity and region of residence. At the time of the survey, the participants had a mean gestation of 29 weeks (median, 30; range, 5–41), and 99% were > 12 weeks’ gestation.

Most women received their antenatal care exclusively through public hospital antenatal clinics (466/815, 57%). A quarter (201/815) received shared antenatal care through their general practitioner and the local public hospital, and small numbers received care through a birth centre, private obstetrician or the ACCHS.

Five per cent of women (37/815) identified as Aboriginal. Most (580/815, 71%) spoke English at home, but 46 other languages were spoken, most commonly Arabic, Cantonese or Mandarin, and Hindi. Nearly half the women (347/815, 43%) had completed a university degree or higher.

Of the 815 women, 255 (31%) reported an underlying condition that put them at higher risk of complications from influenza.

Vaccine uptake and associated factors

Overall, 215 of 786 women (27%; 95% CI, 24%–31%) had received influenza vaccination during their current pregnancy (Site A, 75/340 [22%]; Site B, 39/225 [17%]; Site C, 101/221 [46%]).

Of the 815 women, 324 (40%; 95% CI, 36%–43%) correctly believed influenza vaccination was recommended during pregnancy, while 207 (25%; 95% CI, 23%–29%) incorrectly thought it was not, and 276 (34%; 95% CI, 31%–37%) were unsure.

Multivariate analysis showed that women who had received a recommendation to have influenza vaccination while pregnant were 20.0 times (95% CI, 10.9–36.9; P < 0.01) more likely to have been vaccinated than women who had not received a recommendation. Other factors associated with vaccine uptake are presented in the Box.

Factors found not to be significantly associated with vaccine uptake included previous influenza infection, perceived likelihood of infection, knowledge of recommendations, belief that the vaccine would protect from influenza, concern that the vaccine would cause influenza, age, parity, antenatal care type, level of education, ethnicity, geographical area (rural v urban), and the presence of maternal comorbidities such as asthma, diabetes, obesity and hypertension.

Concern about the safety of the vaccine for the baby was negatively associated with vaccination (Box). However, of the 502 women who expressed concern, 339 (68%; 95% CI, 63%–71%) agreed they would have the vaccine if their doctor or midwife recommended it.

Of the 310 women who reported the source of a recommendation to have influenza vaccination, 160 (52%; 95% CI, 46%–57%) had received it from their doctor and 35 (11%; 95% CI, 8%–15%) from a midwife. Other sources of recommendation included antenatal clinic staff such as receptionists (30; 10%; 95% CI, 7%–14%) and family members (22; 7%; 95% CI, 5%–11%).

Women reporting an underlying condition that put them at higher risk of complications from influenza were no more likely to have received the vaccine than women not reporting this (χ2 = 2.02; P = 0.16) and were no more likely to have received a recommendation to do so (χ2 = 0.02; P = 0.88).

Discussion

Our results show the importance of health care provider recommendation in pregnant women’s willingness to receive influenza vaccination. Vaccine uptake among women in this sample was relatively low (27%), with significant variation between study sites.

This study has some limitations. First, few women in our sample received antenatal care through private obstetric providers. In NSW, about 26% of women seek antenatal care from a private obstetrician or midwife.13 Our sample can therefore be considered representative of the public obstetric care population only.

Second, our data on uptake relied on self-report alone. Self-report has been identified as an acceptable proxy for medical record audit in determining vaccine uptake in older adults.14,15 We expect pregnant women’s recall to be as good or better, given that they were unlikely to have received another vaccine while pregnant in 2011.

Third, the data are cross-sectional and although we were able to identify associations between vaccine uptake and certain study factors, we cannot confirm these associations as causal. However, the findings concur with other studies that found health care provider recommendation, safety perceptions and access to vaccines are major factors in vaccine uptake.1618

Our findings suggest that women’s concerns about the safety of the vaccine for their unborn child can be overcome by health care provider recommendation. Although women who were concerned about their baby’s safety were less likely to be vaccinated, 68% of them agreed that they would have the vaccine if their doctor or midwife recommended it.

Given that a minority of women surveyed, including those at risk due to underlying conditions, had received a vaccination recommendation, it is important to consider what would increase recommendations from health care providers. While some studies have found that physicians are aware of current recommendations,19 others report confusion among health care providers about contraindications and vaccine safety.2022 These findings highlight the need for professional education and support for antenatal care providers.

Vaccine availability at the antenatal clinic was an apparent contributor to uptake. Site C, which had an onsite vaccination nurse at the time of the study and staff members who discussed the recommendations with women in the waiting room, had a 46% uptake. Sites A and B, which had significantly lower uptake, had no such programs during the study period. This suggests that easily accessible vaccine is likely to be important, but other contributing factors cannot be ruled out.

Uptake by women who felt it was easy to access the doctor for vaccination was not significantly different to uptake by women who felt access was difficult. One explanation may be that women attending Site C (29% of the study sample), who live in a rural setting where access to a primary care doctor is comparatively difficult, had an alternative method of accessing vaccination through the clinic.

Our results suggest that provision of information about influenza vaccination for pregnant women will only partially overcome the low uptake in this group. Motivation and education of antenatal care providers is also important. Information for pregnant women and providers, coupled with easily accessible vaccine, have the potential to substantially increase maternal influenza vaccination coverage.

Weighted percentage responses and adjusted odds ratios (AORs) for influenza vaccine uptake by pregnant women, by associated study factors

Factor | Women who had vaccine* | Women who did not have vaccine* | AOR (95% CI) | P

Study site | | | | 0.04
  Site B | 37 (17%) | 186 (83%) | 1.0† |
  Site A | 71 (21%) | 264 (79%) | 1.4 (0.3–2.8) |
  Site C | 103 (46%) | 119 (54%) | 2.4 (1.2–4.8) |
Perceived severity of the consequences of influenza infection during pregnancy | | | | 0.01
  Mild | 43 (21%) | 166 (79%) | 1.0† |
  Neither mild nor severe | 36 (19%) | 155 (81%) | 0.9 (0.4–2.0) |
  Severe | 131 (36%) | 228 (64%) | 2.2 (1.2–4.1) |
Overall feelings toward influenza vaccination during pregnancy | | | | < 0.01
  Oppose | 14 (6%) | 231 (94%) | 1.0† |
  Neither oppose nor support | 20 (9%) | 197 (91%) | 2.1 (0.8–5.3) |
  Support | 179 (57%) | 133 (43%) | 7.6 (3.2–17.9) |
Concerned about baby’s safety if having influenza vaccine during pregnancy | | | | 0.04
  Disagree | 113 (60%) | 75 (40%) | 1.0† |
  Neither disagree nor agree | 43 (39%) | 66 (61%) | 0.8 (0.4–1.7) |
  Agree | 57 (12%) | 426 (88%) | 0.5 (0.2–0.9) |
Would have influenza vaccine while pregnant if GP recommended it | | | | < 0.01
  Disagree | 4 (3%) | 143 (97%) | 1.0† |
  Neither disagree nor agree | 8 (8%) | 93 (92%) | 1.9 (0.4–8.2) |
  Agree | 200 (38%) | 333 (62%) | 7.9 (2.4–26.3) |
It is difficult to get to the doctor to have influenza vaccine while pregnant | | | | 0.01
  Agree | 25 (27%) | 68 (73%) | 1.0† |
  Disagree | 165 (36%) | 297 (64%) | 1.0 (0.4–2.1) |
  Neither disagree nor agree | 22 (10%) | 199 (90%) | 0.3 (0.1–0.9) |
Received recommendation to have influenza vaccine during this pregnancy | | | | < 0.01
  No | 19 (4%) | 432 (96%) | 1.0† |
  Yes | 193 (59%) | 136 (41%) | 20.0 (10.9–36.9) |

GP = general practitioner. * Weighted values; percentages are of total respondents in each row. † Referent category.

Comparative effectiveness research — the missing link in evidence-informed clinical medicine and health care policy making

To change practice, we should move beyond trial-based efficacy to real-world effectiveness

Meaningful health care reform requires robust evidence about which interventions work best for whom and under what circumstances. The Institute of Medicine in the United States has estimated that less than 50% of current treatments are supported by evidence and that 30% of health care expenditure reflects care of uncertain value.1 Among studies testing established clinical standards of care, more than half reported evidence that contradicted standard care or was inconclusive.2 Many Medicare Benefits Schedule services lack comprehensive evidence of comparative safety or effectiveness, while many that have been evaluated have been shown to be ineffective, harmful or of uncertain value compared with alternative forms of care.3

Filling the void — the rise of comparative effectiveness research

Comparative effectiveness research (CER) compares new or existing interventions (or a new dose or intensity of an intervention) to one or more non-placebo alternatives, which may include “usual care”. It can be used to evaluate a broad spectrum of clinical interventions, including diagnostic tests or strategies, screening programs, surgical procedures, pharmaceuticals, prostheses and medical devices, quality and safety improvement interventions, behavioural change and prevention strategies, and care delivery systems. While CER is not a new process — many past trials have compared different interventions — it represents a new focus and consolidation of approaches in clinical and health services research.

In 2009, the US Congress authorised $1.1 billion for CER and, in 2010, the Patient-Centered Outcomes Research Institute was established to identify CER priorities and develop appropriate methodologies. In the United Kingdom, the National Institute for Health Research (which was established in 2006) commissions and disseminates CER that informs clinical decision making, overseen by the National Institute for Health and Clinical Excellence. However, in Australia, there is no comparable group or agency with CER as the prime focus of activity.

For CER to realise its full potential, the research community must accommodate four prerequisites in the following order.

1. Involvement of all relevant stakeholders in setting the research agenda

Research has often lacked meaningful engagement of health care providers and patients in the choice of research questions and design and implementation of the research effort. Researchers and consumers of research must collaboratively identify important unanswered questions among current systematic reviews and clinical guidelines.4 Questions should be selected for CER on the basis of: the perceived needs of key stakeholders (clinicians, patients and health care managers); factors related to potential impact (eg, disease burden, cost of care and variation in outcomes); paucity of effectiveness data among specific populations; and emerging concerns about undisclosed harm.4 In the US, the Agency for Healthcare Research and Quality has developed iterative and transparent methods for defining and prioritising future research needs that involve a wide spectrum of stakeholders (http://www.effectivehealthcare.ahrq.gov). Quantitative modelling methods which calculate the potential value of information in filling existing gaps in knowledge can also assist in prioritisation.5 The Institute of Medicine has issued an initial list of 100 national CER priorities derived by consensus, which includes patient-level and health system-level interventions.6

2. Flexible approach to evidentiary standards

To be useful, CER must use the best possible data sources and methods to provide credible, timely and relevant evidence. The analytic scope of CER includes reanalysing existing data from available studies (in the form of systematic reviews, meta-analyses or decision analyses) or, if these fail to provide answers, generating additional data from new studies.

The aim of CER is to determine intervention benefit among unselected patients in real-world practice settings (ie, measure effectiveness), as opposed to doing so among highly selected patients in tightly controlled experiments (ie, measure efficacy). The design and conduct of CER studies must reflect this aim (Box).7

CER encounters the vexed question regarding the relative clinical utility of observational studies versus experimental trials. Randomised controlled trials (RCTs) have high internal validity, but narrow patient selection criteria limit their generalisability. Observational studies use data on care delivered routinely to unselected populations in various settings, but their results are more vulnerable to confounding and bias owing to the absence of randomisation. The way forward for CER is to encourage more large-scale, real-world RCTs (pragmatic trials) and more rigorous observational studies (see Appendix).

In RCTs, the inclusion of as-treated and per-protocol analyses (in addition to intention-to-treat analyses) can help expose patient-specific differences in intervention uptake and response. More head-to-head RCTs that fairly compare appropriately administered alternative interventions are needed. Network (or mixed-treatment) meta-analysis enables direct and indirect comparisons of different treatments to be combined into one synthesis, which enables greater use of all available RCT evidence than traditional meta-analysis.
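To make the indirect-comparison idea concrete, a minimal sketch of its simplest form (the Bucher adjusted indirect comparison, a building block of network meta-analysis) is shown below; the treatment labels and effect estimates are hypothetical.

```python
# A minimal sketch of an indirect (Bucher) comparison: given trial estimates
# of B vs A and C vs A, the indirect estimate of C vs B is their difference
# on the log odds ratio scale, with the variances added. Inputs hypothetical.
import math

d_AB, se_AB = 0.30, 0.10   # log OR, treatment B vs A (hypothetical)
d_AC, se_AC = 0.50, 0.12   # log OR, treatment C vs A (hypothetical)

d_BC = d_AC - d_AB                       # indirect log OR, C vs B
se_BC = math.sqrt(se_AB**2 + se_AC**2)   # variances add for the difference
lo, hi = d_BC - 1.96 * se_BC, d_BC + 1.96 * se_BC
print(f"indirect OR (C vs B): {math.exp(d_BC):.2f} "
      f"(95% CI, {math.exp(lo):.2f}-{math.exp(hi):.2f})")
```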

In observational studies, several design features improve rigour: prospective and standardised data collection, blinded outcome assessors, prespecified matching or stratification of patient groups, and analytic techniques that minimise confounding, such as risk-adjusted regression modelling and interrupted time series analysis. Multiple high-quality studies related to a single question which consistently show large intervention effects that persist after discounting all important sources of bias confer a high level of credibility. High-quality observational studies may fill evidence gaps more proficiently than RCTs in situations where:

  • technologies are rapidly evolving (ie, there are moving targets)

  • technologies cannot easily be randomised

  • no head-to-head trials exist

  • RCTs exclude certain types of patients or conditions

  • information is being sought about modification of treatment effects due to

    • variation in patient adherence and tolerance

    • use of concomitant treatments

    • dosing or intensity of treatments

    • selection or switching of treatments according to provider and patient preferences.

The choice of study design will depend on: the context in which CER results will be applied; the necessary level of rigour, according to the consequences of incorrect inferences from the sample studied (eg, the potential harm or waste from introducing a mass screening program versus a small-scale niche therapy on the basis of invalid analyses); the feasibility and costs of different study designs; and the urgency of the need for evidence. The overarching goal is to describe methods that, if consistently applied, provide decision makers with a reasonable level of confidence that one intervention is more effective than or equally effective as another.

3. Investment in and redesign of research infrastructure

To fill evidence gaps quickly and definitively, CER will require substantially increased investment in current research infrastructure, both human and technical. This includes expanding existing and adding new research teams, developing new research methods and establishing collaborative research partnerships across multiple sites. It also requires data linkage at the patient level (involving administrative databases and clinical registries for public and private patients), which enables more patients to be studied and facilitates better quality and diversity of studies. Such data linkage will require establishment of data standards and common vocabularies, unique patient identifiers, data quality control and privacy protection systems, and informatics grids which connect practice-based research networks.

Of even greater importance is tackling the current bottlenecks in clinical research: ethics approvals, contract negotiations and incentives for organisations to participate. Truly harmonised and rapidly responsive ethics approval procedures across multiple jurisdictions, standardised contract language, and logistical and financial support for institutions to collect and share data are required.

In attracting researchers, CER also requires a dedicated funding stream for investigator-led CER, as is the case with basic biomedical science research and RCTs that predominantly or solely measure efficacy.8 In 2011, the National Health and Medical Research Council spent less than 5% of its $800 million budget on CER, compared with 47% on biomedical research.9

4. Implementation of CER in changing clinical practice

Generating or revising clinical guidelines using CER results will, by itself, have minimal impact in changing clinical practice quickly. Implementation drivers that have greater impact include: redesign of care processes, professional roles and systems of care; financial incentives that reward better practice; performance reporting and feedback; health information technology and clinical decision support; mandates for shared decision making with patients; and better training of clinicians in CER and its application.10 The interdisciplinary field of implementation science, used to study successful diffusion of innovation, will become an important tool,11 aided by CER trials designed to simultaneously evaluate intervention effectiveness and optimal methods of implementation.12

The biggest implementation challenge is reconciling clinicians to important shifts in who delivers what care, and to whom, under different circumstances. The Medicare Benefits Schedule and Pharmaceutical Benefits Scheme will need to move towards greater investment in efficiently priced interventions that CER shows to be effective and disinvestment in interventions that are not. These reforms will require strong political endorsement, independent researchers, early and ongoing engagement with stakeholders around reimbursement decisions, and demonstrable commitment to evidence-informed best practice. However, CER should not be perceived as a means to substantially reduce overall health care spending. In the US, estimated cost savings from CER are less than 0.1% of total expenditure.13 Instead, the aim is to facilitate better return on investment. In fact, CER may lead to recommendations to adopt new interventions.

Current status of CER and its impact on clinical practice

In a recent review of 231 CER studies (37% on drugs, 29% on behavioural interventions and 16% on procedures), only 35% favoured the new intervention; in contrast, 79% of 804 non-CER studies favoured the new intervention.14 More than 70% of the CER studies relied on non-commercial funding, but less than a quarter evaluated safety and cost.14

CER is informing health care policy and changing clinical practice in Australia and overseas. Australian researchers have reported sentinel CER trials comparing saline infusions with albumin infusions in intensive care15 and early dialysis with late dialysis in end-stage renal failure.16 In Norway, an entire colorectal cancer screening program has been set up as a series of adaptive randomised trials testing different screening tests and procedures.17

Conclusion

CER has the potential to reform health care and transform health care research. The research community needs to accommodate a greater emphasis on CER and address challenges regarding optimal methods for selecting stakeholders, prioritising research questions, selecting study designs that best answer the clinical question posed, determining funding and governance arrangements, and implementing CER findings into practice and policy making.

Elements of clinical and health services research that distinguish efficacy and effectiveness studies*

Study elements | Efficacy studies | Effectiveness studies
Intervention | Protocol strictly enforced; treatments masked; cross-overs discouraged | Highly flexible, as used in routine health care; treatments not masked; cross-overs permitted
Patient population | High disease risk, highly compliant, few comorbidities | Anyone with the condition of interest
Study sites | Academic settings with well resourced research specialists | Routine clinical practice settings
Outcome measures | Often short-term surrogates or composite outcomes | Outcomes that are clinically relevant to patients, clinicians and health care managers
Duration of study | Often short (eg, several months to a year) | Often long
Intensity of monitoring | Intense | Intensity depends on condition of interest and practice setting
Data sources | Specific to trial | Various, including administrative databases
Data collection | Ceases when study is discontinued | Continues as part of routine care
Analysis | Typically intention to treat | Various, depends on study aims and design

* Adapted from Gartlehner et al.7 When the study design is a randomised controlled trial (RCT), efficacy studies are termed explanatory RCTs and effectiveness studies are termed pragmatic RCTs.

Forming networks for research: proposal for an Australian clinical trials alliance

A research network could improve outcomes through advocacy, identifying research gaps and providing shared infrastructure

Research benefits from both competition and collaboration. Inefficiencies occur when researchers are engaged in similar research, often not realising that other groups in Australia are working in the same area. For example, there may be competing clinical trials in uncommon cancers, which would decrease the chance of any individual study recruiting adequate numbers of patients to answer the questions it poses. In May 2012, the Medical Journal of Australia hosted the MJA Clinical Trials Research Summit. This article was written on behalf of contributors to a working group discussion on networking held during that summit.

There are enormous advantages for clinical researchers working together in networks. Centralised coordination and accumulation of data will provide both greater statistical power to answer common research questions and opportunities to resolve uncertainties about hard clinical end points with the greatest impact on participants’ lives. Centralising these functions allows clinical trials to be performed efficiently. Important roles for research networks are summarised in the Box.

Networks would be easier to form and sustain if there were an umbrella group to foster them, and a business case can be derived to support this.1 Such an umbrella group could advocate for the importance of clinical research in improving health care for all Australians, provide the infrastructure to maintain local clinical research networks, and help foster and maintain new clinical trials sites. An umbrella group could help to leverage additional funding from government, community and commercial sources for worthwhile research projects. Additional funding could also assist in bringing groups in similar research fields together, in providing access to common resources and experienced staff, in enabling collaboration to develop standard operating procedures, and in helping groups to obtain access to databases and web-based (and e-health) functionality. Biostatisticians and methodologists could be shared between groups.

A central overview of the research proposals of research groups in the network may identify projects that would be best funded by project grants as opposed to those that could be part of larger program funding. Even in the short term, a coordinating group could help local researchers add value to their clinical trials by linking them with experts in building quality-of-life studies into Phase III trials, or adding DNA or epigenetic substudies, which can add significant value when nested into the original study design. All these opportunities can only be realised by bringing together people whose work may otherwise be in very different arenas.

An umbrella group may help enable clinical data and pathology records to be linked with established blood, DNA, and tissue banks, and help to form new biobanking resources and expertise. The most common data linkage is between cancer registries and death registries so that incidence and survival data are linked. Tissue banks allow linkage of pathology specimens with clinical registry data to explore prognostic factors. These linkages must be governed by policies protecting the privacy of individual data, which should be de-identified in the final reports. There is also the opportunity to link existing disease-specific trials groups, such as in the Australasian Stroke Trials Network or the Australia and New Zealand Breast Cancer Trials Group.

An umbrella group could bring together networks for scientific meetings and meetings on topics of common interest around the funding, infrastructure and management of clinical research. For example, a significant area of interest is the translation of research not only from the laboratory to the clinic, but from the clinic into economic and public policy. An umbrella group could help networks make the connections necessary to facilitate these aspects too.

The alliance we propose is not designed to prescribe the composition or governance of groups. However, it should add value by facilitating the development of clinical research groups and helping to achieve long-term funding to resource and sustain them. Networks of research groups could, for example, be formed within a specialty college or by special interest groups, or within networks of hospitals or universities. Some of the funding to support the research would come from within such groupings. Multiple models of networks would evolve to best fit the clinical research activity and existing clinical and research relationships. The sustainability of groups would be predicated on sharing the role of principal investigator across the centres in the networks for different studies, and continuing to attract young investigators into the networks and mentor them.

As connections between groups become well established, clinical research groups could mentor newly established groups. This would enable experience, adaptable resource materials and even infrastructure to be shared, which could help to ensure a more rapid route to productivity and world-class-standard research work for new trials groups. The ability to share infrastructure and even experienced research staff could make new initiatives less costly to undertake.

An alliance or umbrella group could also identify where gaps exist in clinical research funding. For example, it could identify research that still needs to be undertaken by understanding where evidence is lacking from practice guidelines, and knowing what trials are ongoing from trials registries. Further, an alliance could identify deficiencies in funding for specific levels of research personnel, such as mid-career researchers.

In the longer term, it is possible to envisage such an umbrella group facilitating the development of accreditation standards for clinical trials groups and their investigators. It could liaise with consumer groups to strengthen consumer input into clinical trials for the mutual benefit of aligning research directions with consumer priorities as much as possible. Currently, this happens only sporadically.

Funding for an umbrella group is a key issue. The National Health and Medical Research Council would be an ideal body to consider this, and has previously had schemes such as enabling grants and program grants to fund large initiatives involving groups of researchers. What is needed is sustainable infrastructure funding to give stability to the clinical trials networks and their research teams, and to allow planning of not only Phase I to Phase IV clinical trials, but also larger longitudinal clinical studies, which are also necessary to inform clinical practice.2 It would be unreasonable for one body to be expected to provide all of the funding required, so the umbrella group must be able to leverage funding from the spectrum of parties involved in clinical trials.

An important role for an umbrella group for clinical trials is advocacy, to encourage support for clinical research itself. This involves not only showing the clinically beneficial outcomes of trials that better inform health care for patients with similar conditions, but also the high return on investment that has already been shown by putting the results from clinical trials into practice. Highlighting the better outcomes that have been reported in patients participating in centres with trials programs is a major factor. Key partners in this advocacy are patients and their families, particularly those who have been involved with research and have personally experienced its benefits.

More than just advocating for funding, the umbrella group would be promoting a culture of research in all health care settings, including in primary care and in our hospitals, where the challenges of funding patient care in the short term can become all-consuming. Research not only leads to improved medical outcomes in those hospitals that participate in trials, but is cost-effective because even the control arm of a randomised trial often attracts funding that can save on routine care costs.3,4

Roles for a research network

  • Networks of researchers will be more effective at promoting a research culture and securing sustained funding.

  • An umbrella group should be responsible for:

    • providing centralised expertise

    • providing biostatistical support

    • adding value to clinical trials by identifying substudies

    • bringing the networks together to explore common interests.

Prescribing trends before and after implementation of an antimicrobial stewardship program

Up to 50% of antimicrobial agents prescribed to hospital inpatients are considered to be inappropriate,1,2 and this excess use has been associated with increased mortality, adverse drug reactions and the development of resistant bacteria.3,4 The Australian Commission on Safety and Quality in Health Care recently published recommendations for hospital-based antimicrobial stewardship programs.2 A variety of approaches are available to implement these recommendations, including dissemination of guidelines, education, restricting antimicrobial availability and postprescribing audit and review.

We aimed to evaluate changes in antimicrobial prescribing after the implementation of an antimicrobial stewardship program in a specialist tertiary referral hospital.

Methods

Setting

Alfred Health is a health service comprising three hospitals in metropolitan Melbourne. The largest campus, the Alfred Hospital, is a 430-bed tertiary teaching hospital with medicine, surgery and trauma services. It includes immunocompromised populations (including patients with HIV, cystic fibrosis and heart/lung transplantation, and haematology and bone marrow transplantation) and is supported by a 35-bed intensive care unit (ICU).

Antimicrobial stewardship program

We have previously described the preliminary activities of the antimicrobial stewardship team.5 A web-based antimicrobial approval system (Guidance MS, Melbourne Health) was rolled out from October 2010,6 and a full-time pharmacist was appointed in January 2011. Before this, authorisation to prescribe restricted antimicrobial agents required approval from infectious diseases (ID) registrars, but auditing had suggested poor compliance. In the new system, online approval could be obtained to use restricted antimicrobials for pre-approved indications that were included in national or local consensus guidelines. Short-term approval was granted for other indications specified by the clinician (non-standard indications). Pharmacists could alert the antimicrobial stewardship team of unauthorised antimicrobial use exceeding 24 hours (pharmacist alerts).

Non-ICU antimicrobial stewardship ward rounds (by the stewardship pharmacist and either an ID registrar and/or an ID physician, on weekdays) commenced in January 2011. Each round comprised a focused review of clinical notes and results of investigations aimed at establishing the indication, planned duration, appropriateness, and alternatives to the use of restricted antimicrobial agents. Recommendations were discussed with the treating team and documented in writing; the final decision regarding patient management was the responsibility of the treating team. Patients who required more in-depth management advice were referred to the ID consult service.

Patients were reviewed by the stewardship team if they were receiving at least one restricted antimicrobial for a non-standard indication, where approval had expired, or where a pharmacist alert had been created. At our hospital, 13 restricted antimicrobial agents require web-based approval: amikacin, azithromycin, cefepime, ceftazidime, ceftriaxone, ciprofloxacin, meropenem, moxifloxacin, piperacillin/tazobactam, teicoplanin, ticarcillin/clavulanate, tobramycin and vancomycin. Patients were not reviewed by the antimicrobial stewardship team if they had already received a formal ID consult, or were admitted under lung transplant/cystic fibrosis, haematology and bone marrow transplant, or burns services, where ID physicians performed regular ward rounds (Box 1).
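
The review criteria described above can be summarised programmatically. The sketch below is illustrative only: the restricted-drug list and the excluded units are taken from the text, while all function and field names are hypothetical.

```python
# Illustrative encoding of the stewardship review criteria described
# above. The restricted list is from the text; the field and function
# names are hypothetical.
RESTRICTED = {
    "amikacin", "azithromycin", "cefepime", "ceftazidime", "ceftriaxone",
    "ciprofloxacin", "meropenem", "moxifloxacin", "piperacillin/tazobactam",
    "teicoplanin", "ticarcillin/clavulanate", "tobramycin", "vancomycin",
}

# Units where ID physicians already performed regular ward rounds.
EXCLUDED_UNITS = {
    "lung transplant/cystic fibrosis",
    "haematology and bone marrow transplant",
    "burns",
}

def needs_stewardship_review(orders, unit, has_formal_id_consult):
    """orders: iterable of dicts with keys 'drug', 'indication'
    ('standard' or 'non-standard'), 'approval_expired' (bool) and
    'pharmacist_alert' (bool)."""
    # Patients already covered by an ID consult or an ID-rounded unit
    # were not reviewed by the stewardship team.
    if has_formal_id_consult or unit in EXCLUDED_UNITS:
        return False
    # Review if any restricted antimicrobial is for a non-standard
    # indication, has an expired approval, or has a pharmacist alert.
    return any(
        o["drug"] in RESTRICTED
        and (o["indication"] == "non-standard"
             or o["approval_expired"]
             or o["pharmacist_alert"])
        for o in orders
    )
```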

For several years in the ICU, the microbiology registrar has discussed results and antimicrobial treatments with ICU teams daily (supported by an ID physician three times per week). The stewardship pharmacist augmented this from January 2011 with all patients reviewed routinely. In December 2010, there was also a change to empirical ICU guidelines for health care-acquired sepsis, from ticarcillin/clavulanate or cefepime (for early and late sepsis, respectively) to piperacillin/tazobactam (regardless of onset), in all cases combined with an aminoglycoside, except when combined with quinolone in specified situations. Recommendations for vancomycin use did not change.

Outcome measures

We compared trends in the rate of use of antimicrobial classes before stewardship implementation (January 2008 to December 2010) and after implementation (January 2011 to June 2012). Antimicrobial consumption quantities were converted into defined daily doses (DDD) per 1000 occupied bed-days (OBD) as part of the National Antimicrobial Utilisation Surveillance Program.7,8 Total broad-spectrum antimicrobial use was defined as the sum of usage for all classes except for aminoglycosides, which are regarded as narrow-spectrum antibiotics. Antimicrobial use was based on pharmacy purchasing data and inpatient stock distribution (excluding hospital in the home and the emergency department). Outcomes were assessed by:

  • the mean rate of antimicrobial use in the intervention period compared with the pre-intervention period;

  • model-predicted immediate change in antimicrobial use between the end of the pre-intervention period and the commencement of the intervention period (immediate change);

  • model-predicted change in the rate of antimicrobial use between the pre-intervention period and post-intervention period (change in trend).

The immediate change and the change in trend were both assessed using segmented Poisson regression (a minimal sketch of this model is given below).

We defined a clinically significant decrease in antimicrobial use as:

  • a statistically significant (P < 0.05) immediate decrease in the rate of antimicrobial use; and/or

  • a statistically significant decrease in the rate of change of antimicrobial use in the intervention period compared with the pre-intervention period.

Statistical tests were performed using Stata version 12 (StataCorp). Ethical permission to review these data was obtained from the Alfred Health Human Ethics Committee.
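
To make the segmented regression concrete, the following fragment shows how such an interrupted time-series model could be fitted to monthly antimicrobial use. This is a minimal sketch only, written in Python rather than the Stata used for the analysis; the data layout (monthly DDD counts and occupied bed-day totals) and all variable names are assumptions.

```python
# Minimal sketch of a segmented (interrupted time-series) Poisson
# regression of monthly antimicrobial use. `ddd` is monthly defined
# daily doses and `obd` monthly occupied bed-days; both are
# hypothetical inputs.
import numpy as np
import statsmodels.api as sm

def segmented_poisson(ddd, obd, intervention_month):
    n = len(ddd)
    month = np.arange(n, dtype=float)                  # underlying trend
    step = (month >= intervention_month).astype(float) # immediate level change
    post = step * (month - intervention_month)         # post-intervention trend
    X = sm.add_constant(np.column_stack([month, step, post]))
    # Rates are DDD per 1000 OBD, so bed-days enter as a Poisson offset.
    fit = sm.GLM(np.asarray(ddd), X,
                 family=sm.families.Poisson(),
                 offset=np.log(np.asarray(obd) / 1000.0)).fit()
    b_trend, b_step, b_post = fit.params[1], fit.params[2], fit.params[3]
    return {
        "pre_trend_pct_per_month": 100 * (np.exp(b_trend) - 1),
        "immediate_change_pct": 100 * (np.exp(b_step) - 1),
        "change_in_trend_pct_per_month": 100 * (np.exp(b_post) - 1),
    }
```

Exponentiating the step coefficient gives the immediate change in use at the start of the intervention, and exponentiating the interaction coefficient gives the change in the monthly trend, matching the two outcome measures defined above.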

Results

Impact of antimicrobial stewardship rounds

Between 10 January 2011 and 30 June 2012, 2254 patients were identified as requiring review by the antimicrobial stewardship team. An antimicrobial management recommendation was made in 779 of 2254 (35%) patients, with a total of 1104 recommendations made. Of the patients for whom a recommendation was made, the median age was 66 years (range, 16–98 years) and 503 (65%) were male.

Recommendations were made for patients under 26 different treating units; 63% (490/779) of patients were managed by surgical/trauma units and 37% (289/779) were medical patients. The median duration of antimicrobial therapy before review was 2 days (interquartile range, 1–4 days). Most recommendations were triggered by pharmacist alerts (907/1104; 82%), non-standard approvals (92/1104; 8%) or expiry of the current antimicrobial approval (93/1104; 8%).

Recommendations were made to modify treatment for patients on restricted broad-spectrum antimicrobials; most commonly, ceftriaxone (278), piperacillin/tazobactam (155), ciprofloxacin (99) and vancomycin (96).

In 40% (440/1104) of recommendations, antimicrobial discontinuation was suggested; in an additional 11% (123/1104), antimicrobial de-escalation was recommended; and in 13% (145/1104), an intravenous to oral switch was recommended. Escalation of antimicrobial spectrum was recommended in 2% (25/1104) of cases and antimicrobial initiation in 3% (29/1104). A formal ID consult referral was recommended on 71 occasions (6%).

In 74% (819/1104) of cases, the recommendation was accepted by the treating team. For most of the unaccepted recommendations (233/285; 82%), no reason was cited for non-acceptance. Where reasons for non-acceptance were documented, they included the use of unapproved unit protocols (13) and the insistence of a more senior doctor in the treating team (14).

Impact on overall antimicrobial use

In the ICU, total broad-spectrum antimicrobial use decreased immediately by 16.6% when the intervention commenced (P < 0.001) (Box 2). The mean total use of broad-spectrum antimicrobials fell from 1022 DDD/1000 OBD in the pre-intervention period to 937 DDD/1000 OBD in the post-intervention period. Before the intervention, the rate of broad-spectrum antimicrobial use did not change; following the intervention, it increased by 1.0% per month (P < 0.001). Changes in the use of specific classes of antimicrobials are detailed in Box 2 and Box 3.

In hospital wards other than the ICU, total broad-spectrum antimicrobial use decreased by 9.9% when the intervention commenced (P = 0.002). The mean total use of broad-spectrum antimicrobials fell from 358 DDD/1000 OBD in the pre-intervention period to 333 DDD/1000 OBD in the post-intervention period. Before the intervention, the rate of broad-spectrum antimicrobial use increased by 0.1% per month; following the intervention, it increased by 0.3% per month (P = 0.49). Changes in the use of specific classes of antimicrobials are detailed in Box 2 and Box 3.

Discussion

The antimicrobial stewardship program brought immediate reductions in the use of total broad-spectrum antimicrobials, particularly third/fourth generation cephalosporins and glycopeptides. In addition to case-by-case audit and feedback, regular stewardship rounds identified unapproved unit guidelines, provided an accessible clinical resource for junior doctors, raised awareness of appropriate antimicrobial use and reinforced the use of the web-based antimicrobial approval system. Our experience is consistent with a systematic review of stewardship programs that suggested that restrictive interventions were more likely to be successful than those based only on education or persuasion.9

The interventions that we have implemented are resource intensive, requiring a full-time pharmacist supported by part-time ID physicians (8–10 hours/week). Although a previous study has shown a decrease in several classes of broad-spectrum antimicrobials associated with a web-based approval system only,6 we felt that without an audit and feedback mechanism, this intervention would not be sustainable. Additionally, postprescribing audit and feedback recognises that appropriateness of therapy often needs to be considered on a case-by-case basis, and that broad guidelines on prescribing may not be easily applied to individual patients. Previous studies of similar interventions have found comparable patterns of recommendations, but on a much less intensive scale.10-12 Despite this, only six of 78 respondents in an Australian survey of hospital pharmacies reported implementing regular multidisciplinary antimicrobial stewardship ward rounds.13

There are several limitations to this observational study. We were unable to definitively ascribe changes in prescribing to the intervention, due to confounders such as concurrent changes in ICU empirical treatment guidelines. Aggregated data on antimicrobial use are not able to provide a measure of appropriateness of use and do not account for changes in antimicrobial dosing. The data on antimicrobial use include units known to be high users of broad-spectrum antimicrobials (eg, cystic fibrosis) but where the only new intervention was the introduction of the web-based approval system. A formal cost-effectiveness study was not undertaken; however, we note that the antimicrobial classes where significant decreases in use were seen are relatively inexpensive (ceftriaxone 1 g, $1; vancomycin 1 g, $3) and thus are unlikely to offset the cost of the stewardship team based on saved drug costs alone. The antimicrobial use data used in this study were based on pharmacy purchasing data and inpatient stock distribution, with purchasing practices likely to have affected use data and to have potentially introduced delays in use trends. A 2-month worldwide benzylpenicillin shortage occurred during the study period (September–November 2011), which may have affected antimicrobial use trends at this time.

We attempted to reduce potential adverse effects by using built-in safeguards, including the provision to commence antimicrobials without approval for 24 hours, routinely discussing recommendations with the clinical team, and leaving the final decision regarding changes to antimicrobial therapy to the treating clinicians. We found evidence of greater use of β-lactam–β-lactamase inhibitor combinations that offset the decreased use of other classes, particularly cephalosporins and aminoglycosides — a phenomenon termed “squeezing the antibiotic balloon”. Concerningly, in the ICU we found some evidence of a rebound in the overall use of antimicrobials, and specifically, in the use of carbapenems, fluoroquinolones and glycopeptides. Further work is required to improve the quality of prescribing and evaluate longer term effects on antimicrobial resistance and patient outcomes.

1 Existing infectious diseases services and antimicrobial stewardship interventions introduced during the study

2 Change in antimicrobial use before and after implementation of antimicrobial stewardship interventions

Antimicrobial class/setting | Use before* | Trend before (%/month) | Use after* | Trend after (%/month) | Change in use† | Immediate change (95% CI)‡ | Change in trend (95% CI)§

Intensive care
  Total broad spectrum | 1021.8 | 0 | 937.1 | +1.0% | −8.3% | −16.6% (−19.9%, −13.2%) | +1.0% (0.7%, 1.4%)
  Aminoglycosides | 137.0 | −2.0% | 75.2 | −0.5% | −45.1% | −20.3% (−30.2%, −9.1%) | +1.5% (0.4%, 2.7%)
  Antipseudomonal β-lactam–β-lactamase inhibitor | 129.1 | +0.3% | 191.3 | +0.6% | +48.2% | +34.2% (21.8%, 47.9%) | +0.3% (−0.5%, 1.1%)
  Carbapenems | 113.8 | +0.4% | 133.9 | +2.4% | +17.6% | −11.2% (−20.7%, −0.6%) | +2.1% (1.2%, 3.0%)
  Cephalosporins (3rd/4th generation) | 219.2 | +0.8% | 131.2 | +1.6% | −40.2% | −54.6% (−59.0%, −49.7%) | +0.8% (−0.1%, 1.7%)
  Fluoroquinolones | 318.3 | −0.7% | 278.4 | +0.1% | −12.5% | −3.3% (−10.1%, 4.0%) | +0.7% (0.1%, 1.4%)
  Glycopeptides | 241.4 | −0.2% | 202.3 | +1.5% | −16.2% | −24.8% (−31.1%, −18.0%) | +1.7% (1.0%, 2.5%)

General wards (excluding intensive care)
  Total broad spectrum | 357.8 | +0.1% | 333.4 | +0.3% | −6.8% | −9.9% (−15.7%, −3.7%) | +0.2% (−0.4%, 0.8%)
  Aminoglycosides | 63.7 | −1.0% | 55.8 | −0.7% | −12.5% | +9.8% (−6.7%, 29.1%) | +0.3% (−1.1%, 1.7%)
  Antipseudomonal β-lactam–β-lactamase inhibitor | 50.5 | −0.4% | 54.5 | +1.9% | +8.1% | −2.9% (−18.5%, 15.7%) | +2.3% (0.9%, 3.7%)
  Carbapenems | 52.9 | −0.4% | 53.5 | +0.1% | +1.0% | +6.7% (−10.0%, 26.5%) | +0.5% (−0.9%, 2.0%)
  Cephalosporins (3rd/4th generation) | 90.1 | +0.5% | 80.3 | +0.7% | −10.9% | −22.4% (−32.3%, −11.1%) | +0.2% (−1.0%, 1.4%)
  Fluoroquinolones | 81.8 | 0 | 74.0 | −0.6% | −9.6% | −4.2% (−16.7%, 10.3%) | −0.6% (−1.8%, 0.7%)
  Glycopeptides | 82.5 | +0.3% | 71.2 | −0.4% | −13.8% | −14.2% (−25.6%, −1.2%) | −0.7% (−2.0%, 0.5%)

* Defined daily doses per 1000 occupied bed-days. † Positive represents increased use; negative, decreased use. ‡ Change in use at the time of the introduction of the intervention. § Relative change in monthly rate of use.

3 Antimicrobial use before and after implementation of the antimicrobial stewardship ward rounds, by class of antimicrobial agent

DDD/1000 OBD = defined daily doses per 1000 occupied bed-days. ICU = intensive care unit. Solid vertical line represents commencement of intervention. Dotted lines represent pre-intervention and post-intervention trends in antimicrobial use.

Long-term health and wellbeing of people affected by the 2002 Bali bombing

The 2002 Bali bombing resulted in the deaths of over 200 people, including 88 Australians and 35 Indonesians, making it the single worst act of terrorism to have affected either country.1 A further 209 people were injured, including 66 Australians who suffered severe burns and complex shrapnel wounds.2,3

Terrorism exposure may have significant long-term effects on the mental health and wellbeing of survivors. Post-traumatic stress disorder (PTSD) is the most common psychological condition observed in the aftermath of such events, but it often coexists with depression, functional impairment or substance misuse.4,5 Few studies have examined the long-term effects on terrorism survivors, although one large study found increases in PTSD between 3 and 5 years after the September 11 attacks.6,7 Risk factors included direct exposure (proximity, injury, witnessing horror), incident-related bereavement and low social support.

Bereavement that occurs in traumatic circumstances may have a considerable long-term impact on psychological distress and appears to slow the rate of recovery.8 Deaths involving deliberate violence are associated with higher prevalence of trauma conditions, depression and prolonged or “complicated” grief.8,9 Complicated grief is characterised by continuing separation distress and bereavement-related traumatic distress. While frequently comorbid with depression or PTSD, it is increasingly recognised as a distinct condition that is associated with persistent functional impairments and negative health outcomes, particularly among those bereaved through terrorism.9,10

The health and psychosocial effects of terrorism exposure have rarely been investigated beyond 3–4 years after such incidents.4,11 No studies have examined these effects among Australian survivors. Our aim was to examine the physical and mental health status of individuals directly affected by the 2002 Bali bombing, 8 years after the incident, and to determine demographic, exposure and loss-related correlates of these health outcomes.

Methods

Participants constituted a cross-sectional convenience sample of individuals who had experienced personal exposure and/or loss related to the 2002 Bali bombing and had current contact details listed with a New South Wales Ministry of Health therapeutic support program (Bali Recovery Program), where they had attended at least one consultation. Those who registered interest in response to a written invitation were contacted, given a description of the study, and asked for verbal consent. Professional interviewers from the NSW Health Survey Program completed computer-assisted telephone interviews between 9 July and 22 November 2010, excluding a period around the bombing anniversary (1–23 October). The validity of telephone-based interviews to assess stress and anxiety conditions has been demonstrated.12

Measures

We examined demographic and exposure factors to determine their relationship with physical and mental health outcomes. Exposure variables were: lifetime traumatic incident exposure,13 presence in Bali during/after the bombing, involvement in the search for missing friends/relatives (first 48 hours), and bereavement circumstance (eg, multiple loss, family). Perceived social support from family and friends was assessed with two items from the Perceived Social Support Scale,14 as well as single items regarding neighbourhood social connectedness15 and overall support since the bombing.

Current bereavement experience (“experiential” grief) was measured using six items from the Inventory of Complicated Grief-Revised (ICG-R): separation distress (longing/yearning) and cognitive, affective or behavioural items (anger, acceptance, detachment, emptiness/meaninglessness, difficulty moving on). Item selection was guided by high factor loadings on the single underlying complicated grief factor and by the elapsed time since bereavement.16

Self-rated physical health in the previous month was measured with a single validated item from the NSW Population Health Survey.15 We used the short form of the Connor-Davidson Resilience Scale (CD-RISC2) to measure current perceived personal adaptability and ability to continue to function effectively in stressful circumstances.17 A score of 7–8 indicates high personal resilience.

Anxiety, depression, agitation, psychological fatigue and associated functional impairment (ie, full days unable to manage day-to-day activities due to symptom effects) in the past month were measured using the Kessler Psychological Distress Scale (K10+). Individual scores range from 10 to 50, indicating low (10–15), moderate (16–21), high (22–29) and very high (30–50) psychological distress. The latter is indicative of a significant mental health condition.18

We used the Primary Care PTSD Screen (PC-PTSD) to measure past-month traumatic stress-related symptoms (TSRS) specific to Bali-related experiences. Each single item relates to one underlying characteristic specific to PTSD: re-experiencing, numbing, avoidance and hyper-arousal. The endorsement of 3–4 symptoms indicates “probable” PTSD and the need for specialist assessment.19
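
The cut-offs for the two screening instruments just described can be summarised in a short sketch. The categorisation rules are taken from the descriptions above; the function names are ours.

```python
# Illustrative categorisation rules for the two screens described
# above; the cut-offs are from the text, the function names are ours.
def k10_category(score: int) -> str:
    """Kessler Psychological Distress Scale (K10+): scores 10-50."""
    if not 10 <= score <= 50:
        raise ValueError("K10 scores range from 10 to 50")
    if score <= 15:
        return "low"
    if score <= 21:
        return "moderate"
    if score <= 29:
        return "high"
    return "very high"  # 30-50: indicative of a significant mental health condition

def tsrs_category(symptoms_endorsed: int) -> str:
    """Primary Care PTSD Screen (PC-PTSD): 0-4 symptoms endorsed."""
    if symptoms_endorsed >= 3:
        return "high"    # "probable" PTSD; specialist assessment indicated
    if symptoms_endorsed == 2:
        return "moderate"
    return "low"
```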

Statistical analysis

The dataset was weighted by age and sex to reflect registered participants in the Bali Recovery Program (n = 115). Current physical and mental health were analysed as outcome measures, with demographic, traumatic incident exposure, perceived support and bereavement factors used as independent variables.
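
A minimal sketch of this weighting step follows, assuming hypothetical data frames with age_group and sex columns; the analysis itself was performed with Stata survey commands.

```python
# Sketch of age- and sex-based weighting of the sample to the
# registered program population (n = 115). Column names are
# hypothetical.
import pandas as pd

def poststratification_weights(sample: pd.DataFrame,
                               population: pd.DataFrame) -> pd.Series:
    """Weight each respondent by N_stratum / n_stratum so the sample
    reproduces the population's age-sex distribution."""
    strata = ["age_group", "sex"]
    n_pop = population.groupby(strata).size()   # registered participants
    n_samp = sample.groupby(strata).size()      # survey respondents
    w = (n_pop / n_samp).rename("weight")
    return sample.join(w, on=strata)["weight"]  # weights sum to len(population)
```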

Responses to the support and bereavement questions were expressed as dichotomous variables, with a value of 1 assigned to responses “agree” or “strongly agree”, and 0 to “disagree”, “strongly disagree” and “don’t know”. The outcome variables of physical health, personal resilience and functional loss were dichotomised into high and low (or good and poor) outcomes. Three outcome categories based on established clinical cut-offs were adopted for psychological distress (low–moderate, high and very high) and TSRS (low, moderate and high).18,19

Analyses were performed using Stata statistical software, version 12.0 (StataCorp), with “svy” commands to allow for adjustments for sampling weight. We used the Taylor series linearisation method to determine prevalence estimates, and χ2 tests to test for significant differences in the prevalence of physical and mental health outcomes. Because of the relatively small sample size, an α significance level of P < 0.15 was adopted, as it is a commonly used threshold for entry into multiple logistic regression analyses20 and could provide indicative findings in the context of this exploratory study. Correction for multiple testing was also carried out using the Bonferroni method, dividing the target α level by the number of tests performed; the significant adjusted P values are reported.
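
The Bonferroni step described above amounts to a single division; the example below is illustrative only, and the number of tests shown is not from the study.

```python
# The Bonferroni adjustment divides the target alpha by the number of
# tests performed.
def bonferroni_alpha(target_alpha: float, n_tests: int) -> float:
    return target_alpha / n_tests

# For example, a screening threshold of P < 0.15 across 10 tests
# (an illustrative figure) tightens to P < 0.015.
assert abs(bonferroni_alpha(0.15, 10) - 0.015) < 1e-12
```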

Ethics approval

All study protocols were approved by the ethics committees of the Northern Sydney Local Health District and the University of Western Sydney (H7143).

Results

Of 81 individuals contacted, 55 agreed to participate (68% of eligible respondents). The mean interval between the 2002 bombing and the interview was 7 years and 11 months (range, 7 y 9 m – 8 y 1 m). There were no significant differences between the respondents and the total Bali Recovery Program population in terms of mean age (P = 0.38) or male sex (P = 0.39).

Respondent characteristics

Demographic, exposure and bereavement characteristics of the respondents are shown in Box 1. Of the 55 respondents, 21 were present in Bali during or shortly after the bombing. Almost three-quarters (39/54) experienced at least one family bereavement due to the bombing. The loss of children (of adult age) was predominant (21/54). Fifteen respondents experienced multiple losses of exclusively non-family members (average of seven friends and acquaintances killed).

Physical health and personal resilience

Physical and mental health prevalence estimates for the weighted sample are presented in Box 2 and Box 3. Good physical health in the past month and high personal resilience were reported by 45 and 30 of the 55 respondents, respectively. Respondents had an aggregate resilience score on the CD-RISC2 of 6.45. Poor self-rated health showed a significant relationship with current yearning for the deceased (P = 0.04) and perceived current difficulties moving on with life after the loss (P = 0.02). Experiential bereavement factors were the only variables associated with low personal resilience: current yearning (P = 0.02); perceived detachment from others (P = 0.04); life feeling empty without the deceased (P = 0.02); and perceived difficulty moving on (P = 0.003).

Psychological distress and daily functioning

Current high and very high psychological distress was reported by seven and five respondents, respectively. High distress was significantly associated with difficulty accepting the loss (P = 0.03); feeling detached (P = 0.001); and anger (P = 0.04). Very high distress was associated with bombing-related injury (P = 0.005); current yearning (P = 0.004); difficulty moving on (P = 0.003); and life feeling empty (P < 0.001). Very high distress also showed significant inverse relationships with current marital or de facto relationship (P = 0.02); perceived family support (P = 0.03); and better neighbourhood connectedness (P = 0.02). Loss of at least one full day of functioning in the past month was reported by nine respondents (range, 0–16 days; mean, 0.74). Significantly greater functional loss was associated with bombing-related injury (P = 0.04) and not being in a current marital or de facto relationship (P = 0.04).

Traumatic stress-related symptoms

Moderate TSRS (two symptoms) and high TSRS (three or more symptoms) in the past month were reported by 11 and 15 respondents, respectively. High TSRS was positively associated with all assessed features of bereavement but no other outcome variables: current yearning (P = 0.007); difficulty accepting the loss (P = 0.005); feeling detached (P = 0.01); anger (P = 0.002); life feeling empty (P = 0.02); and difficulty moving on (P < 0.001).

Comparisons with the NSW general population

Compared with NSW population estimates, the respondents reported greater rates of high (12.7% v 8.2%) and very high (9.1% v 2.9%) psychological distress (Box 4) and functional impairment (mean, 0.74 v 0.60 days lost in the past month).15 Respondents were significantly less likely to report low levels of psychological distress (P < 0.05). Good self-reported health was slightly higher among respondents than the population mean (81.8% v 80.4%).

Discussion

Eight years after the first Bali bombing, a substantial proportion of this directly affected group were experiencing high levels of psychological distress and TSRS. These individuals, who had sought help, were also experiencing near-normal physical health, and their aggregate resilience score fell within a “high resilience” range observed in United States population estimates.17 Although a specific comparison group is not available, access to modern treatment methods and support may have promoted these positive long-term outcomes. Notably, experiential features of grief (eg, emptiness, difficulty moving on) were the only factors associated with reduced resilience, suggesting that early intervention with a specific focus on these factors may be indicated for such groups.10

Direct exposure to disasters is considered to have a dose–response effect. Factors such as proximity, injury and perceived threat to life are consistently associated with adverse mental health effects, but they are rarely examined beyond 3–4 years after terrorism incidents because of difficulty accessing affected cohorts.11 Eight years after the Bali bombing, a significant association was observed between incident-related physical injury and both psychological distress and functional impairment. These findings extend the current literature, showing that some of the most direct forms of exposure (proximity and injury) remain substantial risk factors at this extended time point.

Social support has a positive role in mental health and may foster recovery from trauma over time.21 We found that being in a marital or de facto relationship was the only demographic factor associated with distress in the respondent group, showing an inverse relationship with high psychological distress. Inverse relationships were also observed with the broader social support factors of high perceived family support and neighbourhood social connectedness.

The strength of our findings in relation to complicated grief variables appears to highlight important aspects of grief, as it relates to terrorism violence and loss. The ability to “make sense” of a loved one’s death is considered a central process of grieving.8 However, the irrational or meaningless nature of violent death, particularly through terrorism, has been found to interfere with this cognitive process for many survivors.22 Moreover, it is associated with more severe complicated grief, higher psychological distress and poorer physical health.23 While such mediating variables cannot be inferred in relation to our data, it is notable that complicated grief symptoms in this study were also associated with distress and poor physical health, as well as TSRS. Similarly, grief symptoms were also the only factors associated with significantly lower personal resilience. This suggests that among bereaved survivors of terrorism, grief maladaptation may represent a more significant long-term risk factor for health outcomes than incident exposure or post-event variables.

Our findings have implications for the support of people directly affected by terrorism. Complicated grief factors emerged as the strongest correlates of adverse physical and mental health status. Longer-term monitoring of survivor groups is indicated, including screening programs that incorporate grief-specific items. Previous short-term screening has effectively linked survivors with evidence-based care.10,24 However, case-finding from primary care pathways was poor, suggesting that outreach is required for longer-term initiatives.24 Significantly, 7 years after the September 11 attacks, registered individuals who had escaped the World Trade Center reported difficulty accessing physical and mental health care and often failed to connect long-term symptoms with their September 11 exposures.25

Our study has some notable strengths and several limitations. The sample ostensibly constitutes a traumatically bereaved population, with varying levels of direct incident exposure and use of clinical services. Respondents may represent a more seriously affected, but possibly better supported cohort than those who did not seek services from the program. This may limit generalisability of these findings to other survivor groups. This cross-sectional study can only determine significant associations at a single time point. No conclusions can be drawn regarding longitudinal health effects or their causes.

The response rate had the potential to introduce responder bias, although no significant differences in sex or age were found between the respondent sample and the total program population. Items from the complicated grief measure (ICG-R) examined a subset of symptoms only and cannot be considered to indicate syndrome-level complicated grief.

Importantly, this analysis presents the largest study sample to date of Australians directly affected by a single terrorist incident. In completing a quantitative analysis of their health status 8 years after a major bombing, it also represents, to our knowledge, the longest follow-up period of a terrorism-affected population reported in the literature.

1 Demographic, exposure and bereavement-related variables of the respondents

Variable | Unweighted (n = 55) | Weighted (n = 115)
Mean age (range) | 50 (20–73) | 50 (42–53)
Male | 23 (41.8%) | 48 (41.8%)
Education
  University degree | 15 (27.2%) | 31 (27.3%)
  TAFE certificate or diploma | 17 (30.9%) | 36 (30.9%)
  Higher school certificate | 11 (20.0%) | 23 (20.0%)
  School certificate | 9 (16.4%) | 19 (16.4%)
  Other | 3 (5.5%) | 6 (5.5%)
Marital status
  Married or de facto | 40 (72.7%) | 84 (72.7%)
  Widowed | 3 (5.5%) | 6 (5.5%)
  Separated or divorced | 4 (7.3%) | 8 (7.3%)
  Never married | 8 (14.5%) | 17 (14.5%)
Location during/after bombing
  Bali, in club* | 6 (10.9%) | 13 (10.9%)
  Bali, near club* | 3 (5.5%) | 6 (5.5%)
  Bali, not nearby | 3 (5.5%) | 6 (5.5%)
  Bali, arrived after bombing | 9 (16.4%) | 19 (16.4%)
  Not in Bali | 34 (61.8%) | 71 (61.8%)
Injured during bombing
  No | 48 (87.3%) | 100 (87.3%)
  Yes | 7 (12.7%) | 15 (12.7%)
Involved in search (first 48 hours)
  No | 39 (70.9%) | 84 (72.7%)
  Yes | 16 (29.1%) | 31 (27.3%)
Primary bereavement type†
  Child | 21 (38.2%) | 44 (38.9%)
  Sibling | 11 (20.0%) | 23 (20.4%)
  Spouse | 3 (5.5%) | 7 (5.6%)
  Other family member | 4 (7.3%) | 9 (7.4%)
  Non-family member(s) | 15 (27.3%) | 32 (27.8%)
Loss†
  Single family member | 29 (52.7%) | 61 (53.7%)
  Multiple family members | 4 (7.3%) | 9 (7.4%)
  Multiple family and non-family | 6 (10.9%) | 13 (11.1%)
  Multiple non-family | 15 (27.3%) | 32 (27.8%)

TAFE = technical and further education. * Bomb site. † Bereaved respondents (n = 54).

2 Prevalence estimates of individual health and wellbeing indicators in the weighted sample, by sociodemographic and perceived support factors

Covariate | Good self-rated health | High resilience* | Functional loss† | High distress‡ | Very high distress‡ | Moderate TSRS§ | High TSRS§

Sex
  Male | 78.3% | 60.9% | 17.4% | 8.7% | 4.3% | 13.0% | 21.7%
  Female | 84.4% | 53.1% | 15.6% | 15.6% | 12.5% | 25.0% | 34.4%
Age
  18–40 years | 73.3% | 57.9% | 26.3% | 10.5% | 10.5% | 26.3% | 26.3%
  > 40 years | 86.1% | 55.6% | 11.1% | 13.9% | 8.3% | 16.7% | 30.6%
Education
  University | 86.7% | 66.7% | 20.0% | 13.3% | 6.7% | 20.0% | 26.7%
  High school/other | 80.0% | 52.5% | 15.0% | 12.5% | 10.0% | 20.0% | 30.0%
Employed
  No | 75.0% | 41.7% | 25.0% | 16.7% | 8.3% | 8.3% | 50.0%
  Yes | 83.7% | 60.5% | 14.0% | 11.6% | 9.3% | 23.3% | 23.3%
Household income
  ≤ $60 000 | 77.8% | 44.4% | 27.8% | 22.2% | 11.1% | 16.7% | 22.2%
  > $60 000 | 82.9% | 62.9% | 11.4% | 8.6% | 5.7% | 20.0% | 31.4%
Marital status
  Married or partnered | 85.0% | 52.5% | 10.0% | 12.5% | 2.5% | 17.5% | 30.0%
  Not married or partnered | 73.3% | 66.7% | 33.0%** | 13.3% | 26.7%** | 26.7% | 26.7%
Have children
  No | 75.0% | 58.3% | 16.7% | 8.3% | 16.7% | 33.3% | 33.3%
  Yes | 83.7% | 55.8% | 16.3% | 14.0% | 7.0% | 16.3% | 27.9%
Perceived support, family
  Low | 80.0% | 60.0% | 20.0% | 20.0% | 40.0% | 40.0% | 20.0%
  High | 82.0% | 56.0% | 16.0% | 12.0% | 6.0%** | 18.0% | 30.4%
Perceived support, friends
  Low | 85.7% | 42.9% | 14.3% | 14.3% | 14.3% | 0.0% | 42.9%
  High | 81.3% | 58.3% | 16.7% | 12.5% | 8.3% | 22.9% | 27.1%
Social connections, neighbourhood
  Low | 88.9% | 66.7% | 33.3% | 11.1% | 33.3% | 33.3% | 22.2%
  High | 80.4% | 54.3% | 13.0% | 13.0% | 4.3%** | 17.4% | 30.0%
Long-term support, all sources
  Low | 75.0% | 75.0% | 0.0% | 25.0% | 12.5% | 0.0% | 37.5%
  High | 83.0% | 53.2% | 19.1% | 10.6% | 8.5% | 23.4% | 27.7%
Total respondent group (95% CI) | 81.8% (68.9%–90.1%) | 56.4% (42.7%–69.1%) | 16.4% (8.6%–29.0%) | 12.7% (6.0%–24.8%) | 9.1% (3.7%–20.5%) | 20.0% (11.2%–33.1%) | 29.1% (18.4%–42.8%)

TSRS = traumatic stress-related symptoms. * Score of 7–8 on the short form of the Connor-Davidson Resilience Scale. † Unable to complete usual activities on 1 or more days in previous month. ‡ Score of 22–29 on the Kessler Psychological Distress Scale indicates high psychological distress; 30–50 indicates very high distress. § Two symptoms on Primary Care PTSD Screen indicates moderate TSRS; 3–4 symptoms indicates high TSRS. ¶ P < 0.15. ** P < 0.05.

3 Prevalence estimates of individual health and wellbeing indicators in the weighted sample, by incident exposure and bereavement factors

Covariate | Good self-rated health | High resilience* | Functional loss† | High distress‡ | Very high distress‡ | Moderate TSRS§ | High TSRS§

Lifetime exposure¶
  Low | 84.0% | 60.0% | 16.0% | 16.0% | 4.0% | 32.0% | 32.0%
  High | 80.0% | 53.3% | 16.7% | 10.0% | 13.3% | 10.0% | 26.7%
In Bali during or after bombing
  No | 88.2% | 55.9% | 11.8% | 11.8% | 2.9% | 20.6% | 20.6%
  Yes | 71.4% | 57.1% | 23.8% | 14.3% | 19.0% | 19.0% | 42.9%
Injured during bombing
  No | 83.3% | 58.3% | 12.5% | 12.5% | 4.2% | 18.8% | 27.1%
  Yes | 71.4% | 42.9% | 42.9%§§ | 14.3% | 42.9%§§ | 28.6% | 42.9%
Involved in search (first 48 hours)
  No | 87.5% | 60.0% | 12.5% | 12.5% | 5.0% | 22.5% | 22.5%
  Yes | 66.7% | 46.7% | 26.7% | 13.3% | 20.0% | 13.3% | 46.7%
Bereavement**
  Non-family member(s) | 80.0% | 46.7% | 20.0% | 13.3% | 13.3% | 20.0% | 40.0%
  Family member(s) | 84.6% | 61.5% | 12.8% | 10.3% | 7.7% | 20.5% | 23.1%
Bereavement involved child**††
  No | 81.8% | 57.6% | 15.2% | 9.1% | 9.1% | 21.2% | 30.3%
  Yes | 85.7% | 57.1% | 14.3% | 14.3% | 9.5% | 19.0% | 23.8%
Current yearning for loved one(s)**
  No | 93.1% | 72.4% | 10.3% | 3.4% | 0.0% | 20.7% | 10.3%
  Yes | 72.0%§§ | 40.0%§§ | 20.0% | 20.0% | 20.0%§§ | 20.0% | 48.0%§§
Difficulty accepting loss**
  No | 90.6% | 62.5% | 15.6% | 3.1% | 6.3% | 18.8% | 12.5%
  Yes | 71.4% | 47.6% | 14.3% | 23.8%§§ | 14.3% | 19.0% | 52.4%§§
Feel detached from others**
  No | 84.3% | 60.8% | 15.7% | 7.8% | 7.8% | 21.6% | 23.5%
  Yes | 66.7% | 0.0%§§ | 0.0% | 66.7%§§ | 33.3% | 0.0% | 100.0%§§
Feel angry about loss**
  No | 92.0% | 60.0% | 16.0% | 0.0% | 8.0% | 24.0% | 4.0%
  Yes | 75.9% | 55.2% | 13.8% | 20.7%§§ | 10.3% | 17.2% | 48.3%§§
Life feels empty without loved one(s)**
  No | 86.4% | 65.9% | 11.4% | 6.8% | 2.3% | 22.7% | 20.5%
  Yes | 66.7% | 22.2%§§ | 22.2% | 33.3% | 33.3%¶¶ | 11.1% | 66.7%§§
Moving on remains difficult**
  No | 87.5% | 64.6% | 12.5% | 8.3% | 6.2% | 22.9% | 18.8%
  Yes | 50.0%§§ | 0.0%§§ | 21.4% | 33.3% | 33.3%§§ | 0.0% | 100.0%¶¶
Total respondent group (95% CI) | 81.8% (68.9%–90.1%) | 56.4% (42.7%–69.1%) | 16.4% (8.6%–29.0%) | 12.7% (6.0%–24.8%) | 9.1% (3.7%–20.5%) | 20.0% (11.2%–33.1%) | 29.1% (18.4%–42.8%)

TSRS = traumatic stress-related symptoms. * Score of 7–8 on the short form of the Connor-Davidson Resilience Scale. † Unable to complete usual activities on 1 or more days in previous month. ‡ Score of 22–29 on the Kessler Psychological Distress Scale indicates high psychological distress; 30–50 indicates very high distress. § Two symptoms on Primary Care PTSD Screen indicates moderate TSRS; 3–4 symptoms indicates high TSRS. ¶ Lifetime exposure to potentially traumatising events: low = 1–2 events; high = ≥ 3 events (excluding Bali exposure). ** Bereaved respondents (n = 54). †† Non-dependent child. ‡‡ P < 0.15. §§ P < 0.05. ¶¶ P < 0.001 (Bonferroni adjusted).

4 Population comparison of prevalence estimates* of psychological distress in the past month

* Bali Recovery Program 2010 weighted sample and New South Wales population weighted prevalence estimates (2010).15 † Measured using the Kessler Psychological Distress Scale. ‡ P < 0.05.

Australian national birthweight percentiles by sex and gestational age, 1998–2007

Typographical errors in tables of birthweight percentiles: In “Australian national birthweight percentiles by sex and gestational age, 1998–2007”, in the 3 September 2012 issue of the Journal (Med J Aust 2012; 197: 291-294; doi: 10.5694/mja11.11331), there were two errors in Box 3 and one in Box 4 (page 293). In Box 3, the 25th percentile for boys with a gestational age of 34 weeks should be 2100 g, not 200 g, and the 3rd percentile for boys with a gestational age of 36 weeks should be 2015 g, not 015 g. In Box 4, the 97th percentile for girls with a gestational age of 31 weeks should be 2146 g, not 246 g.

The corrected birthweight percentile tables are available online (https://www.mja.com.au/journal/2012/197/5/australian-national-birthweight-percentiles-sex-and-gestational-age-1998-2007).

Clinical effectiveness research: a critical need for health sector research governance capacity

The barriers to conduct of clinical research will require solutions if we are to implement evidence-based health care reform

Reforms in the funding of health services, such as “activity-based” funding initiatives, seek to facilitate changes in how health care is delivered, leading to greater efficiency while maintaining effectiveness. However, often these changes in treatment strategies and service provision evolve without evidence demonstrating effectiveness in terms of patient outcomes. The pressures on health care expenditure (currently around 9% of gross domestic product1) make such an approach untenable and unsustainable. The evidence necessary to support these initiatives can only be derived through carefully conducted clinical research. Most readers would immediately think of clinical trials in terms of pharmaceuticals or clinical devices, and this type of research is critically important, although it is continuing to decline in Australia.2 Other questions relate to the effectiveness of changes in health practice or policy, usually (but not always) based on sensible ideas that seem self-evident. However, in order to function with an evidence base, these ideas need to be proven to be clinically effective and cost-effective. Such research can be costly, and many of the questions to be addressed are not ones that would be the subject of an industry-sponsored trial. Researchers, clinicians and health administrators are therefore faced with the problem of how best to measure the outcomes of changes to health care strategies, without the necessary resources to ask and answer the question.

The MJA Clinical Trials Research Summit held in Sydney on 18 May 2012 included a working group addressing issues of research governance and ethics. The key discussion outcomes of that group were:

  • confusion exists regarding the differences between ethics and governance;

  • variability continues in state and federal legislation and regulations, despite attempts at harmonisation;

  • processes for improvement at government and institutional levels are underway but are not yet complete or implemented;

  • hospital boards and chief executive officers need to have incentives to make the infrastructure work;

  • substantial challenges exist when working with international investigator-initiated trials;

  • trials involving the private health sector include specific difficulties such as insurance and contracts; and

  • national accreditation of researchers and training should be considered.

Costs are not the only barrier. Efforts to rationalise health care provision on the basis of evidence provided through the conduct of clinical research are also hampered by existing or perceived obstacles in the form of cumbersome institutional research governance and ethics approval processes. Substantial changes and streamlining of the processes of ethical review are underway across Australia, addressing inconsistencies and inefficiencies of human research ethics committee approval, financial processes, and contractual clinical research governance processes. Nevertheless, the system remains complex, slow and expensive. Unfortunately, the old adage of “good, quick or cheap: pick two” still applies.

Many researchers fail to distinguish between research governance and ethics. Clinical research in Australia is governed by the National Health and Medical Research Council (NHMRC) National statement on ethical conduct in human research3 and the Australian code for the responsible conduct of research.4 Research governance can “be understood as comprising distinct elements ranging from the consideration of budgets and insurance, to the management and conduct of scientific and ethics review”.5 Research governance thus includes oversight of all processes of ethics review, but also includes responsibilities of both investigators and institutions for the quality and safety of their research.3

The Harmonisation of Multi-centre Ethical Review (HoMER) initiative by the NHMRC is a significant step forward, enabling a single ethics review process that has been adapted for several states. This process, if used effectively, should reduce the resources required to obtain ethics approval for multicentre research, but it has also created some challenges in ensuring that research governance obligations are maintained within various health service jurisdictions.6 Currently, no incentives or requirements exist for health services or hospital chief executive officers to ensure that appropriate infrastructure is in place and working. Similarly, a different set of challenges arises when considering performing research in the private sector, where insurance and contractual issues may differ substantially from those in the public sector.

Much of the non-industry-sponsored clinical research performed in Australia is investigator-initiated research, supported by funding organisations such as the NHMRC, state governments, and other non-government organisations such as Cancer Council Australia, the National Heart Foundation of Australia and cooperative clinical trial groups. At present, investigator-initiated trials require comparable levels of research governance and are certainly subject to the same requirements for good clinical practice as industry-sponsored trials. The research questions addressed by these studies are based on clinical imperatives, a broad understanding of the underlying science, and a necessary ability to work on a shoestring — the latter being the main point of distinction from industry-sponsored trials. Current models of competitive research grant funding do not recognise the complexities, duration, costs and distribution of costs across the length of a clinical trial, especially when considering late clinical outcomes that are often the most clinically relevant ones. As an example, an NHMRC project grant can be funded for at most 5 years and therefore necessitates a focus on end points occurring within that time frame. The clinical questions that we and the community recognise as important might not be able to be answered with such designs. The resources required to meet these requirements continue to escalate and we currently run the risk that these trials will soon be untenable in Australia. Anecdotally, many academic clinical research units are already questioning what level of involvement they should have in such relatively underresourced trials, or whether they should be involved at all, for the most part purely for financial reasons.

Within the current Australian health care environment, clinical research is being conducted in the face of significant headwinds. These arise from the resource costs of complex governance arrangements combined with those of research conduct (Box). Processes to be considered that will improve clinical research capacity might include:

  • continued adoption of electronic health records that span clinical, investigative (ie, pathology and radiology) and therapeutic information (eg, the Australian Orthopaedic Association National Joint Replacement Registry);

  • data-linkage techniques to obtain clinical outcomes (eg, hospital readmission data, Medicare Benefits Schedule and Pharmaceutical Benefits Scheme use data, the National Death Index);

  • better integration of research into routine clinical practice;

  • national accreditation of investigators;

  • standardised good clinical practice training;

  • increased profile for research participation at the clinician–patient level, enabling the conduct of studies that are more representative of a wide spectrum of patients;

  • development of a clinically relevant strategic research agenda led by collaborations between clinicians, researchers and health policy decisionmakers;

  • a culture shift in which lawyers and hospitals communicate and quantify the risk of research appropriately.

Research developed through partnerships between health policymakers and health service providers should lead to outcomes that are more immediately relevant and translatable to the care we provide, the outcomes we achieve and the costs incurred by the health system. Reinvestment of financial and efficiency gains realised from initial research outcomes back into the next relevant translational research question provides a model for a sustainable health system that evolves with the support of a robust clinical research-driven evidence base. These financial windfalls currently go back into government coffers and ideally should be seen as a potential funding stream to support future clinical research.

As the demands on our health system continue to mount, the need for clinical effectiveness research to build a robust evidence base upon which to reform care has become even more acute. It will be critical to align the clinical and policy research agenda while strengthening the governance structures that facilitate the conduct of research within the clinical space if we are to develop “an agile, responsive and self-improving health system for future generations”.7

Key points

Barriers to clinical research include:

  • regulatory complexity

  • inflexibility of ethical review and oversight

  • funding models that are not designed to support clinical trials

  • lack of incentive for engagement of health services in research support.

Solutions may include:

  • different funding models, including support for longer time frames

  • simplification of ethical and governance processes recognising the different goals of industry- versus investigator-initiated research

  • better involvement by health services in supporting research

  • return of savings from clinical research to support further research

  • clinical research key performance indicators for health service administrators.