
The approach to patients with possible cardiac chest pain

Chest pain is a confronting symptom for patients and clinicians alike. Some patients presenting with chest pain will have serious acute illness with a high short-term risk of mortality, but this will be excluded in most patients. Chest pain is one of the most common causes of attendance at hospital emergency departments (EDs) and a frequent cause of presentations to general practice.1 Missed diagnosis, with associated adverse outcomes, can occur when chest pain assessment is based on clinical features alone.2

Regardless of the clinical setting, a stepwise approach should be applied to patients with chest pain (Box 1). In the absence of trauma, the primary focus should be exclusion of four potentially fatal conditions: acute coronary syndrome (ACS; encompassing acute myocardial infarction and unstable angina), pulmonary embolism, aortic dissection and spontaneous pneumothorax. ACS is by far the most common of these. All these conditions may present without immediately obvious physical signs, but the latter three may be accurately excluded by rapid diagnostic testing (predominantly medical imaging). However, ACS is more challenging as it cannot be readily excluded with an acceptable level of accuracy on initial clinical evaluation or with a single investigation. After excluding these conditions, attention should be turned to chronic but serious conditions that may require additional evaluation, such as stable coronary artery disease or aortic stenosis. Next, non-life-threatening conditions that may benefit from specific therapy (eg, herpes zoster, gastro-oesophageal reflux) should be considered. If the clinician is confident that all these causes have been excluded, the patient can be reassured that the chest pain is due to an insignificant cause.

Here, we focus on several evolving areas relating to the assessment of patients with possible cardiac chest pain, including risk stratification, cardiac biomarkers and the role of non-invasive testing for myocardial ischaemia and coronary artery disease. This article is aimed at all clinicians who assess patients with acute, undifferentiated chest pain. It is not a systematic review, but we have directed the reader towards these where appropriate.

The aim of chest pain assessment

There is a dichotomy in the assessment of patients with possible ACS. First, early and accurate identification of patients with ST-segment-elevation myocardial infarction (STEMI) enables provision of emergency reperfusion therapy, which has a major impact on outcome, while accurate identification of patients with other types of ACS (non-ST-segment elevation myocardial infarction [NSTEMI] or unstable angina) allows for early initiation of targeted treatment known to improve outcomes in these groups. Second, accurate exclusion of myocardial ischaemia in patients with chest pain is essential to minimise the morbidity and mortality associated with missed diagnoses, while avoiding unnecessary overinvestigation in those without the disease. However, assessment is complex because of the diversity of clinical presentations of ACS and the lack of a single diagnostic test for the entire spectrum of disease.

Despite recognition that clinical systems are imperfect, a high degree of safety in chest pain assessment processes is demanded. A recent large survey of emergency physicians suggests that the target rate of unexpected adverse outcomes in patients with a negative chest pain assessment should be < 1% at 30 days;3 this target is likely to be equally stringent in primary care. Achieving this level of safety in a timely and cost-effective fashion in an era of increasing demand on acute services presents challenges. This must be considered when the potential value and safety of new developments are assessed.

Clinical approach and risk stratification

Most current diagnostic strategies for acute chest pain focus on the identification of ACS and are based on the premise that other obvious diagnoses have been excluded with accurate clinical assessment.

Systematic reviews of the diagnostic value of clinical features in the assessment of chest pain have largely been carried out in hospital settings,4 where the prevalence of serious disease is higher than in general practice. It is widely understood that no single clinical feature or combination of features can be used to exclude ACS with sufficient sensitivity to obviate the need for further investigation. Thus, a strategic approach based on clinical risk stratification, a period of observation, electrocardiography and serial biomarker evaluation has emerged. In all settings, a 12-lead electrocardiogram (ECG) should be performed immediately in patients presenting acutely with chest pain to exclude ST-segment elevation.

In general practice, the aim should be to differentiate patients who require urgent hospital-based assessment for possible ACS from those with more stable symptoms who may be investigated on an outpatient basis. Limited access to investigations encourages the use of clinical judgement or clinically based decision rules to triage patients who can continue to be managed safely in primary care. Several such decision rules exist, but with limited validation for use in primary care. Despite the lower prevalence of coronary disease in patients presenting with chest pain in primary care, the same limitations found in hospital-based cohorts apply to the value of clinical assessment. A recent well conducted Swedish study concluded that the accuracy of clinical assessment of chest pain by general practitioners was high, but insufficient to safely rule out coronary artery disease.5 Clinicians in general practice should refer patients promptly to hospital for assessment when features suggesting a diagnosis of ACS are present (Box 2).

The value of further investigation in general practice of patients with an acute onset or ongoing symptoms is limited, given that a normal ECG cannot exclude a significant short-term risk of an adverse outcome, and serial biomarker testing is required to exclude myocardial infarction. Nevertheless, in all settings, the resting ECG has a critical role in identifying patients with ST-segment elevation who require emergency reperfusion therapy. Patients with suspected ACS and ongoing pain, pain within the past 12 hours that has resolved but with an abnormal ECG, or other high-risk features (Box 3) should be referred to hospital as an emergency. Given the release kinetics of troponin, a single troponin test may have value in assessing patients with a normal ECG and no high-risk features who present more than 12 hours after resolution of symptoms suggestive of ACS. In such cases, appropriate mechanisms must be in place for prompt review of results and referral to hospital where necessary. If these facilities are unavailable, patients should be referred to the ED for same-day chest pain assessment.

Demographic and cardiovascular risk factors, such as age and sex, influence population risk of disease but should not unduly influence the assessment of individual patients. In the absence of a clear alternative diagnosis, most patients will require additional investigation to exclude coronary artery disease, and the critical decision is usually not whether, but with what urgency, this should be undertaken. In some countries, rapid-access chest pain assessment clinics offering early assessment of patients (usually within 14 days) have become an integral part of strategies for chest pain assessment as an alternative to ED-based assessment.6 However, these have not been widely implemented in Australia, and all acute care facilities with an ED should have an evidence-based strategic approach to assessing patients with chest pain.

Patients should be stratified as being at low, intermediate or high risk of short-term adverse outcomes in the context of possible ACS, in line with the joint guidelines of the National Heart Foundation and Cardiac Society of Australia and New Zealand (NHF/CSANZ) for stratifying patients with ACS (Box 3).6 This model has performed well in the ED setting, with 30-day risks of adverse cardiac outcome of 0%, 7% and 26% in these risk strata, respectively, when the criteria were strictly applied in one cohort.7 Risk stratification models may have greater utility in the ED, where the prevalence of ACS is about 10% (compared with primary care, where rates are lower) and where facilities to further assess patients at increased risk are readily available. The main limitation of this risk stratification model is that few patients qualify as low risk when the criteria are strictly applied. Alternative approaches include the Thrombolysis in Myocardial Infarction (TIMI) score, the Global Registry of Acute Coronary Events (GRACE) score and the GRACE Freedom-from-Event score.8,9 These models, derived from higher-risk populations, were not designed to identify low-risk patients who do not require detailed assessment for exclusion of ACS. Consequently, none can be relied on to identify patients who can be safely discharged from the ED without some period of observation and additional investigation. Nevertheless, risk stratification is essential to guide the appropriate use of resources based on pretest probability of ACS.
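To make the "any qualifying feature escalates the stratum" logic concrete, the following sketch encodes a simplified version of the Box 3 criteria. It is illustrative only: the feature names and the reduction of clinical findings to booleans are our assumptions, not part of the NHF/CSANZ guidelines.

```python
# Illustrative sketch of NHF/CSANZ-style risk stratification (Box 3).
# Feature names are hypothetical simplifications; the guidelines
# themselves are the authoritative source for the criteria.

HIGH_RISK_FEATURES = {
    "ongoing_or_prolonged_pain", "elevated_troponin", "dynamic_st_t_changes",
    "transient_st_elevation", "haemodynamic_compromise", "sustained_vt",
    "syncope", "lv_dysfunction", "recent_pci_or_cabg",
    "diabetes_or_ckd_with_typical_symptoms",
}

INTERMEDIATE_RISK_FEATURES = {
    "resolved_rest_or_prolonged_pain_within_48h", "age_over_65", "known_cad",
    "prior_aspirin_use", "two_or_more_conventional_risk_factors",
    "diabetes_or_ckd_with_atypical_symptoms",
}

def stratify(features: set[str]) -> str:
    """Return 'high', 'intermediate' or 'low' risk for a patient whose
    presentation is consistent with ACS. Any high-risk feature dominates;
    low risk requires the absence of all listed features."""
    if features & HIGH_RISK_FEATURES:
        return "high"
    if features & INTERMEDIATE_RISK_FEATURES:
        return "intermediate"
    return "low"
```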

Cardiac biomarkers

Cardiac troponin levels have a central role in the diagnosis of acute myocardial infarction.10 After exclusion of ST-segment elevation and dynamic ST-segment electrocardiographic changes, serial biomarker testing identifies the remaining patients with acute myocardial infarction. Protocols for the use of serial troponin measurements have largely been based on release kinetics in experimental conditions and have tended to require waiting 6–8 hours (or longer) after presentation for the second test. Recent studies using high-sensitivity assays, which allow a much shorter interval of 2 hours before the second test, and incorporating serial biomarker levels into overall risk stratification models (Box 4) have demonstrated safe accelerated assessment processes, with robust clinical outcome data.11,12 These approaches have yet to be incorporated into clinical guidelines, but almost certainly will be in the foreseeable future.
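As a sketch of how such an accelerated protocol combines its elements, the function below is loosely modelled on the ADAPT study design cited above (TIMI score of 0, no new ischaemic ECG changes, and negative troponin at presentation and at 2 hours). It is our simplification for illustration, not the published rule.

```python
def adp_low_risk(timi_score: int,
                 ischaemic_ecg_changes: bool,
                 troponin_0h: float,
                 troponin_2h: float,
                 assay_99th_percentile: float) -> bool:
    """Loosely modelled on the ADAPT accelerated diagnostic protocol:
    a patient is classified low risk only if the TIMI score is 0, the
    ECG shows no new ischaemic changes, and troponin at presentation
    and at 2 hours are both below the assay's 99th-percentile cut-off."""
    return (timi_score == 0
            and not ischaemic_ecg_changes
            and troponin_0h < assay_99th_percentile
            and troponin_2h < assay_99th_percentile)
```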

Troponin levels are considered abnormal when they exceed the 99th percentile of a healthy reference population using an assay with sufficient accuracy at this level (< 10% coefficient of variation). In practice, few available assays have possessed sufficient accuracy at this level.13 The recent development of high-sensitivity assays with this level of accuracy and lower levels of detection allows measurable troponin levels to be recorded in most of the healthy population. These assays offer the promise of being able to rule out acute myocardial infarction earlier than was possible with less sensitive assays, as well as further acceleration of risk stratification models, but with the probable cost of diminished specificity.14 This will require clinicians to have a better understanding of the causes of elevated troponin levels and the kinetics of troponin release at these new lower levels of detection, possibly by incorporating values expressing change or “delta” troponin.15 The use of delta troponin values has been incorporated into the 2011 addendum to the NHF/CSANZ guidelines, but the evidence for the best approach is still emerging.16 It is imperative that clinicians have a clear understanding of the characteristics of the local troponin assay used, as reference intervals are not transferable between different troponin assays.
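A minimal sketch of how an assay-specific cut-off and a relative "delta" might be applied in code follows. The 20% delta threshold is purely illustrative; published approaches vary, and reference intervals are not transferable between assays.

```python
def troponin_flags(t0: float, t1: float, p99: float,
                   delta_threshold: float = 0.20) -> dict:
    """Flag an elevated troponin (above the assay-specific 99th
    percentile, p99) and a significant relative change ('delta')
    between serial measurements t0 and t1. The 20% default is an
    illustrative assumption, not a guideline value."""
    delta = abs(t1 - t0) / t0 if t0 > 0 else float("inf")
    return {
        "elevated": max(t0, t1) > p99,
        "significant_delta": delta >= delta_threshold,
    }
```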

Investigations for myocardial ischaemia and coronary artery disease

In two groups of patients — those who present with symptoms of ACS and in whom myocardial infarction has been excluded, and those with a stable pattern of chest pain symptoms in whom angina cannot be excluded — additional testing is required to identify those who have prognostically important coronary artery disease or unstable angina. This is an area where well established diagnostic tests exist alongside more recent developments, such as computed tomography coronary angiography (CTCA). The anatomical and pathophysiological bases for these tests are not interchangeable, with some depending on the detection of abnormal coronary blood flow (myocardial perfusion scanning) or myocardial ischaemia (stress electrocardiography and stress echocardiography), while invasive angiography and CTCA demonstrate the anatomical basis of coronary artery disease. Each investigation has different limitations depending on patient factors and the need for contrast media and ionising radiation, and the availability of each may depend on access, cost and local expertise (Box 5).

Non-invasive testing for myocardial ischaemia or coronary artery disease is of most value to patients with intermediate pretest probability of an ACS. In patients with very low risk of coronary artery disease who have symptoms of non-ischaemic pain, other causes of chest pain should be actively excluded before investigations for myocardial ischaemia or coronary atheroma are considered. Similarly, it may be futile to embark on non-invasive testing (with an attendant risk of a false negative result) in a patient with typical symptoms and a very high risk of coronary artery disease. In such cases, prompt specialist referral for consideration of an early invasive strategy should be the first step.

Investigations may identify the presence or effects of coronary artery stenosis but, where this cannot be achieved, a broader aim is to further refine risk stratification to identify patients at low risk of an adverse outcome after discharge from the hospital or ED. Exercise stress electrocardiography has become largely obsolete as a means of diagnosing reversible myocardial ischaemia, due to insufficient diagnostic accuracy, but it retains a well established role in identifying patients with chest pain who can safely be discharged from the ED.17,18 Exercise stress electrocardiography may be limited by patients’ inability to exercise at an adequate level, non-specific electrocardiographic changes (particularly in the setting of an abnormal resting ECG), and false positive results, but it remains attractive by virtue of its low cost and widespread availability.

The combination of cardiac imaging with exercise or pharmacological stress testing can increase accuracy beyond electrocardiography alone (Box 6).19–21 In the United States and Europe, cardiac magnetic resonance imaging has emerged as a safe, non-ionising and more accurate alternative to nuclear perfusion scanning, but it remains predominantly a research tool in Australia.23

CTCA is the most rapidly evolving test for assessing patients with chest pain and is the most sensitive non-invasive test for identifying coronary artery disease.22 Recent studies have shown that this technique allows patients to be safely discharged from the ED.24 A CTCA-based strategy may also be faster than other strategies, particularly when these rely on hospital admission for myocardial perfusion scanning.24,25 However, this finding is of limited value in Australia, where myocardial perfusion scanning has not been the principal investigation for chest pain assessment.

It is important to recognise some limitations of CTCA. Elevated heart rate, coronary calcium and obesity all impair image quality. The use of iodinated contrast media is risky in patients with renal impairment or in those taking metformin. In the widely cited Coronary Computed Tomographic Angiography for Systematic Triage of Acute Chest Pain Patients to Treatment (CT-STAT) trial, only 11% of the patients screened met the study’s inclusion criteria.25 Early studies suggested that CTCA should not be performed until after a second troponin measurement, as myocardial infarctions caused by moderate, rather than severe, coronary stenoses could potentially be missed.26 This emphasises that the strength of CTCA lies in excluding coronary atheroma. Furthermore, in the presence of known coronary artery disease, functional testing for ischaemia may be a more appropriate choice of investigation.27

Some centres perform CTCA with a total radiation dose of < 1 mSv, but in most centres, using general CT scanners without modern dose-reduction equipment, the total dose is likely to be significantly higher. Patients presenting to the ED, where there is an imperative to achieve a diagnostic study regardless of heart rate, may receive 10 mSv, although this is still lower than in most myocardial perfusion scans.28 There is now strong evidence that CT radiation can induce cancer.29 As CTCA could potentially be applied to more than 60% of patients presenting with chest pain in Australia, it is appropriate to remember that other tests are available and that CTCA has not yet demonstrated superiority in this setting. Nevertheless, CTCA is likely to become an increasingly important tool for ruling out significant coronary artery disease in patients with chest pain. Ongoing large clinical trials, such as the Prospective Multicenter Imaging Study for Evaluation of Chest Pain (PROMISE),30 should provide more definitive evidence in this area. Currently, Medicare regulations limit CTCA rebates to specialist referrals, and a robust system of credentialling has been introduced as a quality control measure.

Complex tests need to be appropriately incorporated into an overall strategy of risk-based chest pain assessment, integrating safe, accessible and cost-effective techniques that can accommodate the broadest range of patient presentations and comorbidities.

Conclusion

Chest pain is a common presenting symptom with many diagnostic challenges and pitfalls. Medicopolitical imperatives such as the National Emergency Access Target render the situation more complicated still. Both technology and the evidence base guiding the approach to the problem have developed considerably since the NHF/CSANZ first commissioned guidelines in this area in 2000. Clinicians can now benefit from a better understanding of risk stratification and enhanced diagnostic tools that make excluding avoidable short-term adverse events with a high degree of accuracy a realistic proposition. The challenge remains to implement these advances as widely as possible in an environment of constrained resources and increasing demand. This will be best achieved by an approach that integrates the technology and evidence into a comprehensive but straightforward and accessible strategy.

1 A pragmatic differential diagnosis of non-traumatic chest pain*

Life-threatening diagnoses that should not be missed:

  • Acute coronary syndrome

    • Acute myocardial infarction

    • Unstable angina pectoris

  • Acute pulmonary embolism

  • Aortic dissection

  • Spontaneous pneumothorax

Chronic conditions with an adverse prognosis that require further evaluation:

  • Angina pectoris due to stable coronary artery disease

  • Aortic stenosis

  • Aortic aneurysm

  • Lung cancer

Other acute conditions that may benefit from specific treatment:

  • Acute pericarditis

  • Pneumonia or pleurisy

  • Herpes zoster

  • Peptic ulcer disease

  • Gastro-oesophageal reflux

  • Acute cholecystitis

Other diagnoses:

  • Neuromusculoskeletal causes

  • Psychological causes


* This differential diagnosis is not intended to be exhaustive.

2 Case vignette

A 66-year-old man calls his general practitioner for advice. He has been treated for type 2 diabetes and primary prevention of cardiovascular disease for about 5 years. He calls from the airport where he is due to board an interstate flight but is concerned because he experienced 20 minutes of “burning” central chest discomfort while walking into the airport. He has had similar self-limiting symptoms with exertion for 4 weeks. During this time, he underwent upper gastrointestinal endoscopy that was unremarkable.

Comment: Even with the limited information available from a telephone call, and despite the atypical description of the chest pain, this patient exhibits several features suggestive of intermediate risk for an acute coronary syndrome.6 As such, he should be advised to attend hospital for assessment of chest pain without delay.

The patient is reviewed in the emergency department, where he has an unremarkable electrocardiogram and a cardiac troponin I level of 0.01 μg/L on admission, and 0.02 μg/L 6 hours and 25 minutes later (99th percentile of a healthy reference population, 0.04 μg/L). After the second troponin measurement, an exercise stress echocardiogram is strongly positive at a low workload, with reproduction of the index symptoms at < 50% of the predicted workload and evidence of reversible ischaemia in the territory of the left anterior descending coronary artery. He has cardiac catheterisation the same day and is found to have a critical stenosis of the mid left anterior descending artery, which is treated with percutaneous coronary intervention and deployment of a drug-eluting stent. He makes an uneventful recovery, with normal left ventricular function.

3 Features associated with high risk, intermediate risk and low risk of adverse short-term outcomes in patients presenting with chest pain due to possible acute coronary syndrome*

High risk

Presentation with clinical features consistent with acute coronary syndrome (ACS) and any of the following features:

  • Repetitive or prolonged (> 10 minutes) ongoing chest pain or discomfort

  • Elevated level of at least one cardiac biomarker (troponin recommended)

  • Persistent or dynamic electrocardiographic changes of ST-segment depression ≥ 0.5 mm or T-wave inversion ≥ 2 mm

  • Transient ST-segment elevation of ≥ 0.5 mm in more than two contiguous leads

  • Haemodynamic compromise

  • Sustained ventricular tachycardia

  • Syncope

  • Significant left ventricular (LV) dysfunction (LV ejection fraction < 40%)

  • Prior percutaneous coronary intervention within 6 months or prior coronary artery bypass surgery

  • Presence of diabetes or chronic kidney disease (estimated glomerular filtration rate < 60 mL/minute) and typical symptoms of ACS

Intermediate risk

Presentation with clinical features consistent with ACS and any of the following features, without high-risk features:

  • Chest pain or discomfort within the past 48 hours that occurred at rest, or was repetitive or prolonged (but currently resolved)

  • Age > 65 years

  • Known coronary artery disease

  • Prior aspirin use

  • Two or more of: hypertension, family history, current smoking, hyperlipidaemia

  • Presence of diabetes or chronic kidney disease and atypical symptoms of ACS

Low risk

  • Presentation with clinical features consistent with ACS without intermediate-risk or high-risk features. This includes onset of anginal symptoms within the past month, or worsening in severity or frequency of angina, or lowering of anginal threshold.


* Adapted from Box 8 in the National Heart Foundation of Australia and Cardiac Society of Australia and New Zealand Guidelines for the management of acute coronary syndromes 2006.6 Copyright 2006 The Medical Journal of Australia. Used with permission.

4 A proposed algorithm, incorporating an accelerated diagnostic protocol,* for assessment of possible cardiac chest pain after exclusion of ST-segment elevation on initial ECG


NHF/CSANZ = National Heart Foundation and Cardiac Society of Australia and New Zealand. ACS = acute coronary syndrome.
ECG = electrocardiogram. TIMI = Thrombolysis in Myocardial Infarction. * Based on the ADAPT study.11 NHF/CSANZ Guidelines for the management of acute coronary syndromes 2006.6

5 Features of non-invasive tests available for further risk stratification of patients with chest pain, after excluding acute myocardial infarction

(Cost, radiation and iodinated contrast media are procedural considerations; the remaining columns list relative contraindications.*)

| Condition and test | Cost | Radiation | Iodinated contrast media | Inability to exercise | Significant resting ECG abnormality | Renal impairment | Obesity | Severe airway disease |
|---|---|---|---|---|---|---|---|---|
| Myocardial ischaemia or perfusion | | | | | | | | |
| Exercise stress electrocardiography | $ | No | No | Yes | Yes | No | Yes | Yes§ |
| Stress echocardiography | $$ | No | No | No | Yes | No | Yes | No |
| Myocardial perfusion scanning | $$$ | Yes | No | No | No | No | Yes | Yes§** |
| Obstructive coronary artery disease | | | | | | | | |
| Computed tomography coronary angiography | $$ | Yes | Yes | No | No | Yes | Yes | No |

ECG = electrocardiogram. * Relative contraindications should be discussed further for individual patients. Relative cost indications are based on current Medicare rebates. For example, left bundle branch block. § If there is significant functional impairment. If pharmacological stress testing can be performed. ** Adenosine is contraindicated.

6 Representative performance characteristics of non-invasive tests to identify myocardial ischaemia or obstructive coronary artery disease in patients with chest pain

| Test | Sensitivity | Specificity |
|---|---|---|
| Exercise stress electrocardiography21 | 68% | 77% |
| Stress echocardiography20 | 83% | 77% |
| Exercise stress myocardial perfusion scanning19 | 85%–90% | 70%–75% |
| Computed tomography coronary angiography22 | 99% | 89% |

Urban Aboriginal and Torres Strait Islander children’s exposure to stressful events: a cross-sectional study

Adverse life events and chronic stressors experienced during early childhood can negatively affect development.1,2 While some exposure to stressful events can foster resilience,3 exposure to strong, frequent or prolonged stressors in childhood can result in dysregulation of physiological stress response systems,2,4 which can negatively affect the development of social and emotional wellbeing, behaviour, literacy, and physical and mental health.2,4,5 With the strong association between racial inequalities in health and chronic stress,6,7 the inequalities experienced by Aboriginal and Torres Strait Islander peoples compared with non-Indigenous Australians need to be considered in this context.

Aboriginal and Torres Strait Islander peoples experience higher rates of stressful events than the general population, which can, in part, be attributed to the lasting impact of colonisation, intergenerational trauma and ongoing experiences of disadvantage and exclusion.7–9 The 2010 General Social Survey found that 61% of Australians aged ≥ 18 years had experienced at least one stressful event during the preceding year.10 In comparison, the 2008 National Aboriginal and Torres Strait Islander Social Survey (NATSISS) found that 77% of Indigenous adults and 65% of Indigenous children aged 4–14 years had experienced at least one stressful event,11 and the Western Australian Aboriginal Child Health Survey (WAACHS) found that 71% of children had experienced at least three significant stressors.12 All three surveys used a checklist of negative life events to identify stressful events experienced in the previous year.

Indigenous children living in urban areas experience higher rates of stressful events than their counterparts in rural or remote areas.11,12 However, there is little research investigating their health status, despite the majority of Indigenous Australians living in urban settings and the different social and cultural milieus associated with these communities.13,14 We aimed to determine the frequency and types of stressful events experienced by urban Aboriginal and Torres Strait Islander children, and to explore the relationship between these experiences and the children’s physical health and parental concerns about their behaviour and learning ability.

Methods

This cross-sectional study used data collected during annual child health checks (CHCs) at the Inala Indigenous Health Service (IIHS) in Brisbane. The CHC is a comprehensive health assessment that aims to increase access to preventive health care.15 The IIHS, a Queensland Government general practice service,16 had 867 children listed as regular patients at the time of the study.

We recruited a consecutive sample of children aged ≤ 14 years presenting for CHCs between March 2007 and March 2010, whose parents or carers consented to the CHC information being used for research. Most children had one CHC during the study period; for those who had two or more CHCs, only data from the first visit were included.

Parents or carers were asked if any stressful events had occurred in the family that may have affected the child. Responses to this question were not limited by a time frame of when the events occurred or by use of a checklist of negative life events.

Parents or carers were also asked if the child had a history of chest, ear or skin infections, or injuries or burns, and if they had concerns about the child’s behaviour or learning ability. For school-aged children, parents or carers were asked to compare the child’s school grades to average. The child’s weight and height were measured and body mass index (kg/m²) was calculated. Family groupings of children were identified post-hoc by matching children’s surnames, addresses, known siblings, household size, presentation on the same day for a CHC, or the same stressful events being recorded.

We categorised the reported stressful events and calculated the proportion of children affected by each category of stressor. Using Stata, version 10.0 (StataCorp), we tested for relationships between reported stressful events and the independent variables using binary generalised estimating equation (GEE) methods, nesting children within families, employing exchangeable correlation structures and robust estimators of variance. A two-sided significance level (α) of 5% was used to define statistical significance.
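For readers unfamiliar with this method, the sketch below shows a binary GEE of the kind described, written in Python's statsmodels rather than Stata. The data file and column names (chc_data.csv, ear_infection, stressful_event, family_id) are hypothetical stand-ins for the study's variables.

```python
# Minimal sketch of a binary GEE nesting children within families,
# with an exchangeable correlation structure and robust variance.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("chc_data.csv")  # hypothetical file: one row per child

model = smf.gee(
    "ear_infection ~ stressful_event",        # binary outcome ~ exposure
    groups="family_id",                       # nest children within families
    data=df,
    family=sm.families.Binomial(),            # binary (logistic) GEE
    cov_struct=sm.cov_struct.Exchangeable(),  # exchangeable correlation
)
result = model.fit()  # robust (sandwich) standard errors by default
print(result.summary())
```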

Ethics approval was obtained from the University of Queensland’s Behavioural and Social Sciences Ethical Review Committee and the Metro South Human Research Ethics Committee. The Inala Elders Aboriginal and Torres Strait Islander Corporation supported the project, and results were disseminated back to the Inala Aboriginal and Torres Strait Islander community.

Results

Of the 541 children having CHCs in the study period, parental or carer consent to participate in research was gained for 432 (80%), and 344 (64%) were eligible for this study. These 344 children had a mean age of 7.3 years and were from 247 families. Most children were Aboriginal (312; 91%) and lived with at least one parent (286; 83%) (Box 1). Household size ranged from two to 11 usual members, with a median of five. No sibling was identified for 177 participants (51%); 50 participants (15%) had one sibling, 13 (4%) had two siblings, and seven (2%) had three siblings.

Of the 344 participants, 175 (51%) had experienced stressful events. There were no significant differences in the reported exposure to stressful events between sexes or age groups. Children from single-parent households or with teenage or unemployed parents were also no more likely to have been affected by stressful events than their counterparts (Box 1).

Categories of reported stressful events are shown in Box 2. Of the 175 children who had ever experienced stressful events, 42 (24%) had been affected by conflict in the family, 40 (23%) by the death of a family member or close friend, and 27 (15%) by housing issues, including overcrowding or housing insecurity. Violence or abuse, including domestic violence, had been witnessed by 20 (11%) and personally experienced by 18 children (10%).

Children affected by stressful events were more likely to have parents or carers concerned about their behaviour (P < 0.001) and to have a history of ear (P < 0.001) or skin (P = 0.003) infections (Box 3).

Discussion

About half of the children in this study had ever experienced stressful events. Strong associations were seen between stressful events and a history of ear and skin infections, and parental or carer concerns about the child’s behaviour. No significant differences were seen in the reported exposure to stressful events by individual or familial characteristics.

Compared with the urban Aboriginal and Torres Strait Islander children included in the NATSISS and the WAACHS, our study found a lower rate of stressful events and the absence of some expected stressors.11,12 None of our participants reported racism, trouble with the police or unemployment as stressors, whereas 12%, 16% and 32% of NATSISS respondents, respectively, reported these. In our study, 65% of parents or carers were unemployed, compared with a background unemployment rate for Aboriginal and Torres Strait Islander adults living in Inala of 24% in 2006, and rates of 11% in the broader population of Inala and 4% across Brisbane.17 It is possible that the common experience of unemployment has resulted in it becoming normalised in this group and therefore not considered stressful. However, it may also be an underlying but unacknowledged or unrecognised cause of other stressors such as familial conflict, illness or housing insecurity.

Our study has both strengths and limitations. We used routinely collected clinical data from children attending the health service, thus minimising inconvenience for study participants. Our 344 participants represented 64% of children having CHCs at the IIHS in the study period, and 40% of active patients aged ≤ 14 years. Issues such as the sickness of the presenting child or time constraints of the parent, carer or clinic staff could affect the number of CHCs conducted. Nonetheless, despite our clinic population comprising only 0.8% of Australia’s urban Indigenous children, our service completed 10% of the CHCs done in Australian metropolitan areas to June 2009.15

Our open-ended enquiry about types and frequency of stressful events introduces the potential for recall bias and underreporting. However, such enquiry is likely to elicit events that were particularly notable for the child and family.18 The open-ended nature of the enquiry also precluded assessment of a dose–response relationship between exposure and outcomes, and the cross-sectional nature of the data prevented any determination of causality between exposure and outcomes. The lack of a time frame associated with the reported stressful events also prevents establishing a temporal relationship between exposure and outcome.

Finally, this study represents one urban Indigenous context and may not be generalisable to other urban areas or Indigenous primary health care services, although there is little reason to assume there would be substantial differences in the results.19 These limitations do not negate the seriousness of our findings that about half the children were reported to have been affected by stressful events, and the significant association of this with poorer physical health and parental concerns about behaviour.

As childhood exposure to stress affects future health and wellbeing, longitudinal research is necessary to disentangle the causes and effects of stressful events. Health care services need to respond to any disclosure of stressful events by providing access to appropriate medical, psychological or social interventions, preferably through “in house” health professionals or referral to culturally competent community agencies. However, simply treating the impact of stressful events is insufficient without also dealing with the colonial legacy of displacement, child removal, marginalisation and exploitation that contributes to the excessive rates of transgenerational trauma and socioeconomic disadvantage experienced by Aboriginal and Torres Strait Islander peoples.9,20 The risk of not addressing both the causes and the effects of childhood exposure to stressful events is that the disparity in life expectancy between Indigenous and non-Indigenous Australians is unlikely to improve.8,9

1 Individual and familial characteristics of children having child health checks at Inala Indigenous Health Service, March 2007 – March 2010, by experience of stressful events

| Characteristic | Overall (n = 344) | ≥ 1 stressful event (n = 175) | No stressful event (n = 169) | P* |
|---|---|---|---|---|
| Sex | | | | 0.40 |
| Male | 180 (52%) | 94 (54%) | 86 (51%) | |
| Female | 164 (48%) | 81 (46%) | 83 (49%) | |
| Age (years) | | | | 0.45 |
| ≤ 4 | 107 (31%) | 52 (30%) | 55 (33%) | |
| 5–9 | 142 (41%) | 78 (45%) | 64 (38%) | |
| 10–14 | 95 (28%) | 45 (26%) | 50 (30%) | |
| Ethnicity | | | | 0.96 |
| Aboriginal | 312 (91%) | 157 (90%) | 155 (92%) | |
| Torres Strait Islander | 7 (2%) | 5 (3%) | 2 (1%) | |
| Both Aboriginal and Torres Strait Islander | 25 (7%) | 13 (7%) | 12 (7%) | |
| Main carer with whom the child lives | | | | 0.07 |
| Parent(s) | 286 (84%) | 137 (80%) | 149 (89%) | |
| Grandparent(s) | 15 (4%) | 8 (5%) | 7 (4%) | |
| Other relative(s) | 20 (6%) | 15 (9%) | 5 (3%) | |
| Friend(s) | 1 (0.3%) | 1 (1%) | 0 | |
| In care | 17 (5%) | 11 (6%) | 6 (4%) | |
| Single-parent household | | | | 0.10 |
| Yes | 151 (44%) | 87 (50%) | 64 (38%) | |
| No | 192 (56%) | 87 (50%) | 105 (62%) | |
| Employment status of parent(s)/carer(s) | | | | 0.10 |
| Employed | 121 (35%) | 49 (28%) | 72 (43%) | |
| Unemployed | 222 (65%) | 125 (72%) | 97 (57%) | |
| Teenage parent(s) | | | | 0.23 |
| Yes | 14 (4%) | 9 (5%) | 5 (3%) | |
| No | 329 (96%) | 165 (95%) | 164 (97%) | |

* Calculated using binary generalised estimating equation methods, clustering children within their families. Three observations missing from the group exposed to stressful events, and two from the group not exposed. One observation missing from the group exposed to stressful events.

2 Frequency of stressful events reported by parents or carers during child health checks for children who had experienced at least one stressful event (n = 175)

| Stressful event category | No. (%) of children |
|---|---|
| Conflict in the family | 42 (24%) |
| Death of family member or close friend | 40 (23%) |
| Parental divorce or separation | 28 (16%) |
| Housing issues (including overcrowding and housing insecurity) | 27 (15%) |
| Lack of emotional support from parents | 26 (15%) |
| Serious illness in the family | 23 (13%) |
| Witness to violence or abuse (including domestic violence) | 20 (11%) |
| Experienced abuse or violent crime (including domestic violence) | 18 (10%) |
| Living away from parents, with other family members | 17 (10%) |
| In foster care | 16 (9%) |
| Alcohol or drug-related problem in the family | 13 (7%) |
| Problems at school | 11 (6%) |
| New to community | 10 (6%) |
| Family member in prison | 7 (4%) |
| Other | 11 (6%) |

3 Parental concerns about children’s behaviour and learning ability and physical health of children, by experience of stressful events

| Variable | Overall | ≥ 1 stressful event | No stressful event | P* |
|---|---|---|---|---|
| Behaviour and learning ability | | | | |
| Parents or carers concerned about behaviour | 234 | 124 | 110 | < 0.001 |
| Yes | 69 (29%) | 50 (40%) | 19 (17%) | |
| No | 165 (71%) | 74 (60%) | 91 (83%) | |
| Parents or carers concerned about learning | 236 | 126 | 110 | 0.10 |
| Yes | 75 (32%) | 47 (37%) | 28 (25%) | |
| No | 161 (68%) | 79 (63%) | 82 (75%) | |
| School grades on report card | 193 | 95 | 98 | 0.19 |
| Below average | 36 (19%) | 24 (25%) | 12 (12%) | |
| Average or above average | 157 (81%) | 71 (75%) | 86 (88%) | |
| Physical health | | | | |
| Body mass index (BMI) category | 292 | 151 | 141 | 0.62 |
| Overweight or obese (BMI > 25 kg/m²) | 75 (26%) | 37 (25%) | 38 (27%) | |
| Normal or underweight (BMI ≤ 25 kg/m²) | 217 (74%) | 114 (75%) | 103 (73%) | |
| History of chest infections | 308 | 156 | 152 | 0.10 |
| Yes | 33 (11%) | 23 (15%) | 10 (7%) | |
| No | 275 (89%) | 133 (85%) | 142 (93%) | |
| History of ear infections | 313 | 160 | 153 | < 0.001 |
| Yes | 87 (28%) | 58 (36%) | 29 (19%) | |
| No | 226 (72%) | 102 (64%) | 124 (81%) | |
| History of skin infections | 308 | 158 | 150 | 0.003 |
| Yes | 67 (22%) | 48 (30%) | 19 (13%) | |
| No | 241 (78%) | 110 (70%) | 131 (87%) | |
| History of burns or injuries | 306 | 153 | 153 | 0.51 |
| Yes | 42 (14%) | 25 (16%) | 17 (11%) | |
| No | 264 (86%) | 128 (84%) | 136 (89%) | |

* Calculated using binary generalised estimating equation methods, clustering children within their families. Denominators shown for each variable differ due to varying numbers of missing observations. For school-aged children (5–14 years).

Our first National Primary Health Care Strategy: 3 years on, what change for general practice?

Achievements, challenges and missing pieces in the progress of this critical element in our national reform program

Our first National Primary Health Care Strategy was released in 2010, after 2 years of consultation, development and review. In keeping with the National Health and Hospitals Reform Commission report the year before, it identified a strong and evolving primary health care system, anchored by general practice, as being “critical to the future success and sustainability of our entire health care system”.

The Strategy’s five key building blocks for reform are:

  • improving regional integration between providers and services, filling service gaps and driving change

  • more extensive and innovative use of e-health to integrate care, improve patient outcomes and deliver capacity, quality and cost-effectiveness

  • building a flexible and well trained workforce through effective training and teamwork

  • improving the sector’s physical infrastructure

  • a focus on financing and system performance to drive practice and system outcomes.

Three years on, the report card is mixed — there have been promising first steps in some areas, concrete progress in others, and “yet to commence” in those most challenging.

Where are we now?

Regional integration: Sixty-one Medicare Locals have been established to better integrate and coordinate local health services with the state-funded health sector, better support clinicians and service providers to improve patient care, identify and redress unmet need, and improve the focus on prevention and early intervention. Many Medicare Locals have undertaken significant population health planning (particularly regarding after-hours services), and some have made substantial progress on collaboration with secondary services. Others are still occupied with the challenging transition from Divisions of General Practice and engagement with the broader primary care sector.

E-health: Medicare Benefits Schedule (MBS) items and incentives for general practitioners and specialists using videoconference consulting outside metropolitan areas and in residential aged care have been introduced, although access to these was reduced last year to rural and residential care only. In 2012, 17 549 videoconference consultations were undertaken nationally. The personally controlled electronic health record (PCEHR) system was launched in July 2012 and, by February 2013, 56 000 Australians had registered for a PCEHR, supported by 1171 health care organisations and 1325 individual practitioners.1 The new electronic Practice Incentives Program payment framework for general practice, introduced this year, requires significantly more sophisticated messaging capability to meet minimum criteria.

Workforce and infrastructure: Annual GP registrar intake will increase to 1200 places in 2014, although primary care training placements for other health disciplines are limited. The rapid escalation in medical student and registrar numbers has stretched GP trainers to the limit, with strong concerns expressed by both the Australian Medical Association and National General Practice Supervisors’ Association.

The GP Super Clinics program has proven controversial, with 29 of 60 Super Clinics funded by the Department of Health and Ageing currently listed as operational.2 Practice infrastructure grants to expand capacity in over 400 existing general practices nationally have been fully subscribed.

Financing and system performance: Little reform has occurred in this area. Budget cuts have reduced general practice quality incentives and MBS mental health payments. Professional bodies have been involved in robust debate about the decreasing real value of Medicare rebates and appropriate funding support for complex comorbidity management. The Improvement Foundation has continued to develop and implement new Australian Primary Care Collaboratives. This year, responsibility for these will transition to the Australian Medicare Local Alliance, maximising local support. The Australian Commission on Safety and Quality in Health Care’s focus on primary care will also commence in earnest.

Collateral change: Change outside the Strategy’s remit has also occurred. The Royal Australian College of General Practitioners (RACGP) released its blueprint for the general practice sector, endorsed by all GP groups through United General Practice Australia in March 2012.3 Much discussion has centred on its overlap with new international models of primary care, particularly the patient-centred medical home (PCMH).4 New national models of general practice that boost capacity and reach are also evolving.5 Last year, the RACGP launched its pilot of clinical indicators to allow practices to benchmark performance in 22 areas of evidence-based practice.

Links with state health initiatives: Late last year, the Department of Health and Ageing released the National Primary Health Care Strategic Framework for comment.6 It presents priority areas (within the context of the Strategy) for the federal and state governments to pursue together. These include development and promotion of innovative “pathways through care” models, collaboration between local health networks and Medicare Locals to examine innovative care coordination for people with complex chronic conditions, promotion of multidisciplinary teams, support for the continued development of GPs with advanced skills, and development of integrated extended-hours clinics. Funding models incentivising safety and quality, a population health focus and reduced hospitalisation are also discussed. Several states have already undertaken transfer of some state-funded primary care functions to non-government organisations and Medicare Locals. All state and territory governments have until June 2013 to deliver their responses to the Framework.

What difference have we made?

To date, health care consumers will have recognised little tangible change. The general practice workforce in Australia still struggles to meet need, with great variations in access across the country. Depending on location, 3%–15% of Australians delay or avoid seeking general practice care because of cost.7 Many people registered for a PCEHR are awaiting practice management software capability for upload. A few rural Australians, but not those in urban areas, have enjoyed videoconference consultations with a specialist as part of their primary care. Many Medicare Locals are busy transitioning from the Divisions of General Practice from which they evolved. For some communities, this has resulted in reduced service availability; for others, expanded options. While state government response to the Strategy is expected to encourage better care integration between the community and hospital sectors, concrete changes to consumer experience will take time. Timely, safe discharge communication between hospitals and general practice still varies greatly.8

Unfinished business

The sleeping giants within the still-unfolding Strategy are workforce, payment reform, innovation in e-health, and performance measurement.

The estimated number of practice nurses has trebled since 2003, but the GP workforce remains stagnant,9 and accurate figures on allied health growth in primary care are impossible to obtain. Without sufficient capable workforce, expansion of primary care capacity and quality is also impossible.

While most other developed countries have moved to “blended” payments for GP services, Australian primary care funding only minimally supplements fee-for-service (FFS) payment. In the United States, the PCMH funding model complements FFS care with bundled payments for prevention and complex chronic disease care, and performance bonuses for quality care deliverables. Significant additional funding to PCMHs is provided by redirecting savings in secondary care, particularly emergency department visits and inappropriate hospital care. In Ontario, Canada, application of the PCMH model has resulted in significantly increased overall practice funding, with FFS now representing about 40% of individual practitioner income.10

Effective e-health in primary care is more about building patient–provider relationships, clinical communication, access and patients’ engagement in their own good health than about implementation and electronic recordkeeping in isolation. Primary care in North America has demonstrated much greater use than Australia of videoconference consultations, email interaction and patient e-tools — largely due to funding and change management support.

The future

Health care reform is slow, painful and expensive. Much of the activity of the past 3 years has been invested in infrastructure and strategy rather than tangible deliverables. Change is not easy in a time-poor sector with shrinking Medicare support and ever more complex care demands. The challenge for all of us in primary care lies in complementing maximal patient and community contribution to good health with efficient, high-quality, accessible care. This demands a fundamental rethink about the best vehicle with which to deliver the lofty aims we have set. The critical challenges of more appropriately funding primary care for the increasingly complex future role it must play, better incorporating e-health and appropriate performance review, and effective engagement with the secondary and aged care sectors are fundamental to this and seem still a long way off. As the 2013 federal election approaches, the Opposition has signalled a refocus for Medicare Locals (likely to be renamed) around lean administration, strong service delivery, and care integration with local hospital networks where appropriate.

Despite some progress, our National Primary Health Care Strategy has far to travel to reach health consumers with the promised benefits. It is a crucial journey for our health care future, which demands that the rhetoric around emerging reform initiatives be matched with clear policy, workforce support, innovation in e-health and effective incentives. In a very tight fiscal environment, the focus must be on initiatives that demonstrate tangible patient and community benefit, maintain the reform momentum, and support effective, coordinated and efficient care locally. Marathon rather than sprint it will be, and only as successful as the vision and commitment of clinicians, policymakers and consumers of health care to tackle it constructively together.

Why can’t we get permanent general practitioners for our country town?

To the Editor: A rural general practitioner’s workload is significantly larger than that of his or her urban colleagues, and this is attributable to work activities in rural public hospitals.1 A GP who provides after-hours on-call service to the community through the local hospital or emergency department is not only valued, but also more likely to be retained in the rural workforce.2 However, on-call commitments and the unrelenting nature of after-hours care can negatively affect professional and personal wellbeing, family life and opportunities to enjoy the rural location.3

I currently work in the city, but did much of my training in rural and regional areas. Doing GP locums allows me to stay in touch with rural and regional practice. However, working as a locum has highlighted to me how arduous on-call commitments can be. When you are working as the solo town doctor, or one of two, there is not much opportunity to share the on-call roster as recommended by the Rural Doctors Association of Australia.4

In my experience, some hospitals have restrictive service contracts, which further contribute to the GP’s burden. For example, the doctor is mandated to be within 10–15 minutes of the hospital at all times while on-call, and must attend, when requested, within the times specified in the contract. These times are the same as those expected in large urban hospitals.

In large urban tertiary teaching hospitals with on-site doctors, the median time taken for a doctor to attend patients whose condition has deteriorated unexpectedly is 13 minutes, and one in five episodes has a recorded response time longer than 30 minutes.5 By requiring on-call GPs to meet or better the expected response times of urban tertiary hospitals, “on-call” in effect becomes “on duty”.

An on-call weekend results in being confined to home from 8 am Friday to 8 am Monday. I empathise with GPs working permanently in a country town and having to cope with such restrictions.

Given what we know about the negative impact of onerous on-call and after-hours commitments on doctors, including GPs, and the subsequent negative effect on workforce retention in rural and remote Australia,2 why are we still setting ourselves up for continuing failure?

Telehealth and equitable access to health care

To the Editor: I write in protest about the short-sighted decision of the Australian Government to remove outer metropolitan areas from eligibility for Medicare rebates for telehealth from 1 January 2013. Since then, there has been a 29% drop in the number of video consultations, with 9476 recorded from 1 January to 28 February 2013, compared with 13 311 from 1 November to 31 December 2012.1

Telehealth is usually proposed as a tool suitable for the rural and remote locations of Australia, providing lower costs, increased access to specialists, improved collaboration, increased quality of local service and greater access to professional development.2 Yet barriers to care are more than geographical; they can be temporal, financial and cultural.3 In particular, the health care system remains inequitable while patients with disabilities face a range of barriers in achieving access.4 Telehealth assists in overcoming these barriers, enabling improved access to care in urban areas.5

We instituted a telehealth service in psychiatry and pain management to a general practice super clinic located in the City of Playford council area in Adelaide’s outer northern suburbs. This area is underserved, with a low socioeconomic status and poorer health outcomes.6 The South Australian branch of the Royal Australian and New Zealand College of Psychiatrists, and the Australian Pain Society, informed us that there were no private locally resident or visiting specialists in this area, and the referring general practitioners said that the particular patients seen would not otherwise have accessed specialist care. They included patients who were homeless, patients with disabilities or those who lacked their own transport.

I understand that the boundary changes were intended not to undermine existing outer metropolitan private specialists but, in this case, we were clearly able to bring private specialist resources into the area with benefit to patients. Additionally, we recruited health care students on clinical attachments to assist patients with their teleconsultations. At the same time, the students were able to learn from the specialists during the video communication sessions.

I propose two recommendations that would result in telehealth increasing equitable access to care: first, that underserved outer metropolitan areas of low socioeconomic status be reinstated for Medicare rebates for telehealth, and second, that patients with a disability should be eligible for video consultations regardless of their place of residence.

Primary prevention of cardiovascular disease: new guidelines, technologies and therapies

A continuing trend in primary prevention of cardiovascular disease (CVD) in general practice has been the move away from managing isolated CVD risk factors, such as hypertension and dyslipidaemia, towards assessment and management of these factors under the banner of absolute CVD risk.1 This has been underscored by the publication of guidelines for assessment and management of absolute risk.2,3 These guidelines seek to consolidate various individual disease and risk factor guidelines, recognising CVD as a common end-disease pathway and, therefore, the benefit of taking a common absolute risk-based approach. The rationale behind adopting this approach can be summarised as follows:

  • Medication is best initiated in those most likely to benefit from it, and who therefore have a favourable risk-to-benefit ratio.

  • It is more cost-effective than intervention for single risk factors.

  • It avoids medicalisation of the low-risk population.

  • It better identifies those most likely to have covert CVD, avoiding costly additional investigations.

  • Beneficial therapeutic agents can be initiated at a level above the ideal rather than at an arbitrary cut point.

  • Due attention is paid to CVD risk, which might otherwise be subsumed within a particular chronic disease management strategy (eg, diabetes and blood glucose levels)4 (see example in Box 1).

Here, we provide information for general practitioners on new approaches to clinical management of CVD risk factors in patients without overt disease, and new technologies and therapies to assess and manage them.

The Australian absolute CVD risk guidelines

In the Australian National Vascular Disease Prevention Alliance (NVDPA) guidelines for assessing absolute CVD risk, absolute risk is calculated as the probability of a stroke, transient ischaemic attack, myocardial infarction, angina, peripheral arterial disease or heart failure occurring within the next 5 years.2 Absolute risk is categorised, and can be communicated to patients, as low (< 10%), moderate (10%–15%) or high (> 15%). Medication is recommended for individuals at high risk and sometimes for those at moderate risk if additional risk factors are at play (eg, Aboriginal or Torres Strait Islander status or a family history of premature CVD).
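
To make the banding explicit, the following minimal Python sketch maps a calculated 5-year absolute risk to the NVDPA categories quoted above. The risk value itself is assumed to come from the Framingham-based calculator; only the categorisation step is shown, and the function name is ours.

```python
# Minimal sketch: NVDPA risk bands as described above. The 5-year
# absolute risk (%) is assumed to come from the Framingham-based
# calculator (eg, cvdcheck.org.au); only the banding is implemented.

def categorise_absolute_risk(five_year_risk_pct: float) -> str:
    if five_year_risk_pct < 10:
        return "low"
    if five_year_risk_pct <= 15:
        return "moderate"
    return "high"

print(categorise_absolute_risk(21))  # "high" (cf. Joe in Box 1)
print(categorise_absolute_risk(3))   # "low"  (cf. Jane in Box 1)
```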

Doctors can reliably estimate relative risk — that is, the risk level of an individual with a risk factor compared with someone who does not have that risk factor.5 The problem with relative risk is that it tells you that a smoker is at greater risk than a non-smoker but does not convey what that risk actually is. The absolute CVD risk calculator recommended by the NVDPA (http://www.cvdcheck.org.au) is based on the Framingham Heart Study.2,6,7 It has good predictive value for subsequent CVD events in untreated individuals and has been validated in the Australian population aged 30–74 years.8

If a patient has manifest CVD (eg, past history of stroke or myocardial infarction) or already has a condition that places him or her at high risk of CVD (Box 2), then no risk assessment is required before commencing blood pressure (BP)-lowering and/or lipid-lowering therapy. The NVDPA assessment guidelines recommend that all other adults aged 45–74 years should be assessed for cardiovascular risk.2 Below the age of 45 years, almost all patients will be at low risk. For people older than 74 years, the guidelines recommend entering their age as 74 in the calculator, to provide a minimum estimate of risk.

All attempts at recalibrating calculators for Aboriginal or Torres Strait Islander peoples have so far failed, with recognition that risk is underestimated in this population.9 Assessment should commence in Aboriginal or Torres Strait Islander adults at the age of 35 years (in recognition of the reduced life expectancy in this population) and the score used as a minimum estimate of risk.
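
Taken together, the assessment rules in the two preceding paragraphs amount to a small decision procedure, sketched below in Python. Here `clinically_high_risk` stands in for manifest CVD and the Box 2 conditions; it is an assumed input for illustration, not something the calculator computes.

```python
# Hedged sketch of the NVDPA assessment entry rules described above.

def risk_assessment_plan(age: int, indigenous: bool,
                         clinically_high_risk: bool) -> str:
    if clinically_high_risk:
        # Manifest CVD or a Box 2 condition: high risk by definition.
        return "high risk; no risk calculation required"
    start_age = 35 if indigenous else 45
    if age < start_age:
        return "routine assessment not indicated (almost all low risk)"
    entry_age = min(age, 74)  # enter 74 for people older than 74
    minimum = age > 74 or indigenous
    return (f"run calculator with age {entry_age}"
            + ("; treat score as a minimum estimate" if minimum else ""))

print(risk_assessment_plan(80, False, False))
# "run calculator with age 74; treat score as a minimum estimate"
```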

Barriers to uptake of the absolute risk approach, such as acceptability, feasibility and effectiveness in the primary care context, need to be overcome.10 Depending on the clinical context, treatment on the basis of elevated single risk factors may still be appropriate. For example, atrial fibrillation has a well recognised thromboembolic stroke risk, which warrants a disease-specific stroke and bleeding risk assessment for anticoagulant or antiplatelet therapy.

Recommended changes to clinical practice from the 2012 management guidelines

The 2012 NVDPA guidelines for managing absolute CVD risk recommend both BP-lowering and lipid-lowering agents for all patients at high absolute risk of CVD, unless contraindicated or clinically inappropriate.3 For patients at moderate risk, treatment with a BP-lowering and/or lipid-lowering agent should be considered if the risk remains elevated after lifestyle interventions, BP is ≥ 160/100 mmHg, there is a family history of premature CVD, or the patient is of South Asian, Middle Eastern, Maori, Pacific Islander, Aboriginal or Torres Strait Islander ethnicity. The guideline authors recommend that people with a BP of ≥ 160/100 mmHg be treated for their BP regardless of their absolute risk level. The 2012 guidelines have also revised and simplified BP targets to aim for with BP-lowering treatment and lifestyle measures: for the general population or those with a reduced glomerular filtration rate, the target is ≤ 140/90 mmHg; and for people with microalbuminuria, macroalbuminuria or diabetes, the target is ≤ 130/80 mmHg.
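
As a quick reference, the revised BP targets reduce to a two-branch rule; this small sketch restates it, with the boolean flags as assumed inputs for illustration only.

```python
# The simplified 2012 BP targets quoted above, as a two-branch rule.
# Flags are assumed inputs; clinical judgement still applies.

def bp_target(albuminuria: bool, diabetes: bool) -> str:
    if albuminuria or diabetes:
        return "<= 130/80 mmHg"
    return "<= 140/90 mmHg"  # general population, incl. reduced GFR

print(bp_target(albuminuria=False, diabetes=True))  # "<= 130/80 mmHg"
```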

Aspirin and other antiplatelet agents are no longer routinely recommended for use in primary prevention of CVD, including for people with diabetes or high absolute CVD risk. Previous recommendations for people with diabetes were based on the assumption of equivalent CVD risk to those with established CVD but without diabetes.11 However, recent primary prevention trials in patients with diabetes have not shown benefit for aspirin.12,13 Harm–benefit analyses of antiplatelet drugs for primary prevention assume that risk of CVD rises with age but risk of adverse effects does not. While it is true that CVD risk is largely determined by age, the risk of adverse effects is also likely to be higher in older people.14 An ongoing clinical trial, Aspirin in Reducing Events in the Elderly (ASPREE), is being conducted in Australian general practice to examine whether the benefits of routine aspirin outweigh the harms in patients aged 70 years or older.15

Lifestyle interventions

Regardless of a patient’s risk level, the advice in the 2012 NVDPA guidelines remains that treatment should always begin with lifestyle interventions, such as smoking cessation; reducing intake of dietary salt, fat, high-calorie drinks and overall calories; and increasing exercise. Most people at moderate absolute risk should be given the opportunity to reduce their risk by following lifestyle advice, with drug therapy only considered if their risk does not reduce in 3–6 months or if they have specific additional risk factors, such as Aboriginal or Torres Strait Islander status or a family history of premature CVD.

Smoking is the most important modifiable risk factor, and action on smoking is always the highest-priority lifestyle intervention. Smoking cessation reduces the risk of CVD substantially and sustainably, and it also reduces all-cause mortality.16 Health professional advice, nicotine replacement therapy and medication are effective smoking cessation interventions.17–19

Weight loss is important in that it reduces the risk of elevated BP and lipid levels and of diabetes. Even modest weight loss (5%–10% of initial weight) can improve health.20 There are no simple answers to the question of which diet will achieve weight loss; whichever diet is chosen, it needs to be sustainable to be effective. There is some evidence that low-carbohydrate–high-protein diets, such as the CSIRO (Commonwealth Scientific and Industrial Research Organisation) diet, achieve greater weight loss and lower attrition rates in the short term, but longer-term evidence is lacking.21

Weight-loss medications available to date have been disappointing because of the lack of sustained weight loss and the risk of side effects. Several weight-loss medications have been withdrawn from the market because of harmful effects, the most recent being sibutramine.22 In addition, weight loss achieved using medication is unlikely to have the same health benefits as weight loss achieved by diet and exercise, with all their associated benefits for health and wellbeing. The draft National Health and Medical Research Council Clinical practice guidelines for the management of overweight and obesity in adults, adolescents and children in Australia23 recommend orlistat as an agent with proven effectiveness in adults,24 although its use will be limited by the acceptability of side effects, such as flatulence and anal leakage.

Weight-loss surgery has shown promise for patients with significant obesity. The Swedish Obese Subjects study found average weight loss from various types of bariatric surgery of 14%–25% over 10 years, and a reduction in all-cause mortality, diabetes and CVD.25 However, this was not a randomised controlled trial, and the intensity of monitoring and follow-up of patients may influence the generalisability of the study results. Weight-loss surgery is recommended if a patient has a body mass index > 40 kg/m2, or > 35 kg/m2 with comorbidity.26

Regular physical activity reduces CVD risk and individual CVD risk factors and protects against other diseases.3 Health benefits are achieved with around 150–300 minutes of moderate-intensity activity or 75–150 minutes of vigorous activity each week.3,23

How do guideline recommendations align with prescribing criteria for lipid-lowering drugs?

In 2006, the Pharmaceutical Benefits Advisory Committee (PBAC) revised the Pharmaceutical Benefits Scheme (PBS) General statement for lipid-lowering drugs prescribed as pharmaceutical benefits.27 This revision aimed to bring the PBS prescribing criteria for lipid-lowering drugs more in line with the absolute risk approach, while recognising that, at the time, a lack of widespread access to a CVD risk calculator was a barrier to using absolute risk as a prescribing criterion. Conditions considered in the NVDPA guidelines2,3 to confer a high risk of CVD that are not currently included in the PBS criteria are: moderate or severe chronic kidney disease; total cholesterol level > 7.5 mmol/L in males who are less than 35 years old and in premenopausal women; and systolic BP ≥ 180 mmHg and total cholesterol level < 6.5 mmol/L, or total cholesterol level < 5.5 mmol/L and high-density lipoprotein cholesterol level > 1.0 mmol/L.

To date, what has not been presented to the PBAC for consideration is the effectiveness and cost-effectiveness of lipid-lowering treatments for patients at high (or moderate) absolute risk with “normal” lipid levels. From our previous analysis of the Australian Diabetes, Obesity and Lifestyle Study (AusDiab) cohort, about 90% of patients at high absolute risk who do not meet the current PBS criteria for prescription of lipid-lowering drugs are in this category.28

How frequently should absolute CVD risk be monitored?

Once management decisions have been made, absolute risk should be monitored according to the recommendations in Box 3. The reassessment of absolute risk in the absence of a trigger such as initiation of smoking or diabetes diagnosis may be conducted at longer intervals than currently recommended, especially in low-risk individuals, as reclassification (ie, moving from low to moderate or moderate to high risk), which would lead to management changes, is likely to be an infrequent phenomenon.29 If a patient is already being treated for elevated BP or lipid levels, the pretreatment values should be used to calculate absolute risk.

Is there evidence for the absolute CVD risk approach?

Using the absolute risk approach, patients who have isolated elevated risk factors, but low absolute risk, will generally not be treated with medication. Because age is such a strong predictor of risk, this means that younger patients with isolated elevated risk factors will in general not be treated with BP-lowering or lipid-lowering agents. Many clinicians may be uncomfortable with this approach, feeling that delaying treatment until a patient’s absolute risk reaches a particular threshold can allow irreparable damage to occur. It is unlikely that there will ever be a randomised controlled trial comparing the absolute risk approach with the isolated risk factor approach, because of the sample size and time that would be required.

However, previously conducted trials support the absolute risk approach. Individual patient data (IPD) meta-analyses of both BP-lowering and lipid-lowering drug trials have shown that the relative risk reduction of cardiovascular events is consistent regardless of baseline BP or lipid levels. The IPD meta-analysis of BP-lowering drug trials showed that the relative risk reduction was constant down to the lowest BP levels observed in the trials (110 mmHg systolic and 70 mmHg diastolic), and that results were consistent in trials of patients with a prior history of coronary heart disease or stroke and those with no prior history of vascular disease.30 The same result has been observed in cohort studies.31 Similarly, the recently updated IPD meta-analysis of lipid-lowering drug trials by the Cholesterol Treatment Trialists’ Collaboration confirms that the relative risk reduction is consistent in patients with or without pre-existing CVD and is independent of the baseline cholesterol level.32 This study provided further empirical evidence to support the absolute risk approach, by showing a constant relative risk reduction regardless of the baseline risk of a cardiovascular event, and therefore increasing benefits from treatment in patients with increased absolute risk of CVD.

New technologies for CVD risk factors and risk assessment

New technologies that are currently having an impact on BP management in general practice are ambulatory BP devices and oscillometric BP devices (for both clinic and home use).33,34 These devices permit an estimation of BP that is more representative of usual BP and associated CVD risk, through a combination of reducing “white coat” effects and observer error, allowing systematic collection of multiple BP recordings and, for measures made outside the clinic, identification of masked hypertension. Ambulatory BP monitoring involves measuring BP at regular intervals over a 24-hour period while patients undergo normal daily activities, including sleep. Home BP monitoring is a validated method for monitoring and managing a patient’s BP, which can be readily incorporated into practice. Where barriers to ambulatory and home BP monitoring exist, oscillometric devices can be used to approximate mean daytime ambulatory BP.35 This “automated office BP” measurement has three basic principles: multiple BP readings are taken; an automated device is used; and measurements are taken while the patient rests quietly alone. The oscillometric device distributed by the High Blood Pressure Research Council of Australia can be used in this way. The machine can be set to automatically record three BP measures at 5-minute intervals. The patient is then left to sit alone for 15 minutes in a room or a screened area, and the BP value displayed after this time is the average of all recordings. These devices can also be used to screen for peripheral arterial disease.36 Users should be aware that BP levels measured this way are generally 5 mmHg lower than clinic measures.37
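
To make the automated office BP protocol concrete, here is a sketch of the averaging step. The readings are invented for illustration; real devices perform this calculation internally.

```python
# Sketch of the "automated office BP" averaging described above:
# three readings taken at 5-minute intervals while the patient rests
# alone, reported as the mean. Readings are invented (systolic,
# diastolic) pairs in mmHg.

from statistics import mean

readings = [(138, 86), (134, 84), (132, 82)]

systolic = mean(s for s, _ in readings)
diastolic = mean(d for _, d in readings)
print(f"Automated office BP: {systolic:.0f}/{diastolic:.0f} mmHg")
# Values measured this way run about 5 mmHg lower than clinic BP.
```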

Multiple clinical, biomarker and imaging tests have been proposed as methods for identifying patients at high risk of CVD. Of most clinical use would be tests that could more effectively discriminate moderate-risk patients who are actually at high or low risk of a cardiovascular event. A series of recent reviews has shown that many of the studies aiming to improve identification of patients at increased risk by using non-traditional risk factors, such as C-reactive protein (CRP) and fibrinogen, had methodological flaws and that, on the evidence to date, these factors were unlikely to improve the discrimination of risk.38 Similarly, using apolipoproteins A and B reclassifies less than 1% of patients beyond the classification based on traditional risk factors.39 The cost, inconvenience to patients and potential harm mean that calls for these tests to be used more widely are premature. Biomarkers already in use in general practice, such as CRP, add very little to current risk algorithms.40 The use of computed tomography coronary angiography to screen patients needs careful evaluation of cost and radiation risk before implementation.41 Coronary artery calcium scoring may have a future role in reclassification for individuals found to be at moderate risk using routine risk stratification.42

New therapies for CVD risk factors

People who regularly consume fish have lower CVD event rates than non-consumers.43,44 However, intervention trials of fish oils are less convincing. A meta-analysis of 48 randomised controlled trials showed no benefit of omega-3 fats on mortality or cardiovascular events in patients with or without existing coronary heart disease.45 Therefore, in primary prevention, it is justified to recommend the consumption of fish as part of a healthy diet, without the need to use fish oil supplements.

Denervation of the kidney using minimally invasive devices lowers BP in the majority of treated individuals, and may also have benefits for glucose metabolism, renal impairment, left ventricular hypertrophy and other conditions.46 This method is still early in its development and availability.

Conclusion

The move to an approach based on absolute risk for the primary prevention of CVD is likely to improve the effectiveness and cost-effectiveness of treatment, and the 2009 and 2012 NVDPA guidelines support this approach. The absolute risk approach targets the patients who are most likely to benefit from medication, and reduces the medicalisation of patients at low risk. The increasing availability of cardiovascular risk calculators, either on the internet or as standalone software, also removes one of the barriers to implementing the absolute risk approach. New technologies have varying evidence of utility, but oscillometric BP devices can be readily adopted. The role of coronary artery calcium scoring and other biomarkers in risk stratification is yet to be established.

1 Case example: how the absolute risk approach better targets therapy

Joe is a 64-year-old man who smokes but does not have diabetes or known cardiovascular disease (CVD). His blood pressure is 136/82 mmHg, total cholesterol level is 5.4 mmol/L and high-density lipoprotein (HDL) cholesterol level is 1.0 mmol/L.

Jane is a 46-year-old woman who does not smoke and does not have diabetes or known CVD. Her blood pressure is 142/82 mmHg, total cholesterol level is 6.5 mmol/L and HDL cholesterol level is 1.4 mmol/L.

Using the isolated risk factor approach, other than smoking, Joe has no elevated individual risk factors that would warrant treatment with medication. Jane, on the other hand, has hypercholesterolaemia and hypertension that would see her taking lifelong antihypertensive and lipid-lowering therapy.

However, using the absolute risk approach, Joe’s absolute risk is high (21%) and Jane’s is low (3%). Joe requires medication in addition to lifestyle changes, while Jane needs attention paid to her antecedent risk behaviour rather than medication.

2 Conditions conferring a high risk of cardiovascular disease*

  • Diabetes and age > 60 years

  • Diabetes with microalbuminuria (> 20 µg/min or urinary albumin : creatinine ratio > 2.5 mg/mmol for males, > 3.5 mg/mmol for females)

  • Moderate or severe chronic kidney disease (persistent proteinuria or estimated glomerular filtration rate < 45 mL/min/1.73 m2)

  • A previous diagnosis of familial hypercholesterolaemia

  • Systolic blood pressure ≥ 180 mmHg or diastolic blood pressure ≥ 110 mmHg

  • Serum total cholesterol level > 7.5 mmol/L


* Reproduced with permission from section 5.2 of the Guidelines for the assessment of absolute cardiovascular disease risk. © 2009 National Heart Foundation of Australia.2

3 Recommended frequency of monitoring for absolute cardiovascular disease (CVD) risk*

Regular review of absolute CVD risk is recommended at intervals according to the initial assessed risk level:

  • Low (< 10% risk of cardiovascular event within 5 years): review every 2 years

  • Moderate (10%–15% risk of cardiovascular event within 5 years): review every 6–12 months

  • High (> 15% risk of cardiovascular event within 5 years): review according to clinical context


* Reproduced with permission from Practice point (f) of the Guidelines for the assessment of absolute cardiovascular disease risk. © 2009 National Heart Foundation of Australia.2

Sevenfold rise in likelihood of pertussis test requests in a stable set of Australian general practice encounters, 2000–2011

Pertussis, commonly known as whooping cough, is caused by the small, gram-negative coccobacillus Bordetella pertussis. Classic whooping cough illness is characterised by intense paroxysmal coughing followed by an inspiratory “whoop”, especially in young children or those without prior immunity, followed by a protracted cough.1,2 It is now more widely understood that these characteristic symptoms are not always present during B. pertussis infection, and that individuals may only have symptoms similar to those of the common cold or a non-specific upper respiratory tract infection.2

In recent years, rates of pertussis notifications have increased dramatically across Australia and in many other parts of the world.3–6 The rise has been seen in all Australian states and territories, with the highest notification rates in children aged under 15 years.7 Although increased notifications may be due to a true increase in circulating B. pertussis, it is possible that the magnitude of the increase has been amplified by better recognition of disease and more frequent testing.8

Historically, the diagnostic gold standard for pertussis laboratory testing was bacterial culture from nasopharyngeal secretions during the early phases of infection (Weeks 1 and 2), and serological testing was used as an alternative diagnostic method during later phases of infection.2 However, even with ideal specimen collection, transport and handling, culture has low sensitivity and does not provide timely results. Although serological testing is more sensitive, sensitivity and specificity may be lowered depending on the timing of specimen collection and the patient’s infection and vaccination history.9 Polymerase chain reaction (PCR) testing has emerged as a key diagnostic method, and respiratory specimens are now commonly tested for pertussis using PCR in Australia and other countries.2,4,10 PCR testing provides more sensitive and rapid results than culture and serological testing. Also, PCR allows less invasive specimen collection — especially useful in younger age groups, in whom infection rates are high and serum collection may be challenging.1

The key datasets used to monitor pertussis incidence and epidemiology in Australia — pertussis notifications, and pertussis-coded hospitalisations and deaths — are populated by positive test results from laboratories and, as such, are not independent of changes in testing practices. Without negative test results or other denominator data to assess changes in testing behaviour, it is difficult to distinguish changes in recorded disease incidence that are due to the effect of increased testing from any true increase in disease.

To better understand the role testing behaviour has on current pertussis epidemiology in Australia, we investigated pertussis testing trends in a stable set of general practice encounters. We hypothesised that the likelihood of pertussis testing, in a stable set of encounters that were most likely to result in a pertussis test request, has increased over time and that this may have led to amplification of laboratory-confirmed pertussis identification in Australia.

Methods

We analysed data from the Bettering the Evaluation and Care of Health (BEACH) program and the National Notifiable Diseases Surveillance System (NNDSS).

The BEACH program is a continuous cross-sectional national study that began collecting details of Australian general practice encounters in April 1998. Study methods for the BEACH program have been described elsewhere11 and are summarised in Appendix 1.

Initially, all encounters for which a pertussis test (ICPC-2 [International Classification of Primary Care, Version 2] PLUS code R33007 [ICPC-2 PLUS label: Test;pertussis]12) was ordered in the period April 2009 – March 2011 were identified and examined. During this period, 30 BEACH problems resulted in a pertussis test request at some time, and nine problems accounted for 90.9% of all pertussis test requests in the dataset. Four other problems, for which a pertussis test request was made at more than 1% of general practice management occasions of that problem, were added (Appendix 2). The 13 selected problems accounted for 92.3% of pertussis tests ordered between April 2009 and March 2011. These were labelled “pertussis-related problems” (PRPs) and data for these problems at encounters recorded between April 2000 and March 2011 were extracted.

BEACH data were grouped into two pre-epidemic periods (before the start of the national pertussis outbreak in 2008) and three epidemic years. During the pre-epidemic periods (April 2000 – March 2004 and April 2004 – March 2008), testing proportions were constant. For each pre-epidemic period and epidemic year, the proportion of PRPs with a pertussis test ordered and the proportion of BEACH problems that were PRPs were calculated. The proportions of PRPs with a pertussis test ordered were grouped into clinically meaningful age groups: 0–4 years, 5–9 years, 10–19 years, 20–39 years, 40–59 years, and ≥ 60 years.

The NNDSS collates notifications of confirmed and probable pertussis cases received in each state and territory of Australia under appropriate public health legislation.13 Notified cases meet a pertussis case definition, which requires: laboratory definitive evidence; or laboratory suggestive evidence and clinical evidence; or clinical evidence and epidemiological evidence (Appendix 3).

To match the BEACH years, all Australian pertussis notifications between April 2000 and March 2011 were extracted from the NNDSS database, including data on age and laboratory testing method. Pertussis notifications were aggregated by month and year, by age group, and by laboratory test method (serological, PCR, culture or unknown). Notifications that had more than one testing modality reported were classified into a single test category using the following hierarchy: culture, PCR, serological, unknown. A total of 1318 notifications were coded only as “antigen detection”, “histopathology”, “microscopy”, “not done” or “other” (epidemiologically linked cases); these were excluded from the analysis as they accounted for only 0.9% of notifications over the study period. The rates of pertussis notifications per 100 000 population were then calculated for each pre-epidemic period and epidemic year.
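
The single-category hierarchy applied to multi-modality notifications can be stated compactly in code. This Python sketch is ours, not from the study’s analysis scripts, and simply encodes the ordering described above.

```python
# Sketch of the test-method hierarchy described above: notifications
# reporting several modalities are assigned one category in the
# order culture > PCR > serological > unknown.

HIERARCHY = ("culture", "PCR", "serological", "unknown")

def classify_notification(reported_methods: set) -> str:
    for method in HIERARCHY:
        if method in reported_methods:
            return method
    raise ValueError("no recognised testing modality")

print(classify_notification({"serological", "PCR"}))  # -> "PCR"
```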

Temporal changes in the proportions of PRPs with a pertussis test ordered and the rates of pertussis notifications were assessed with a non-parametric test for trend over the whole study period and by calculating odds ratios with robust 95% confidence intervals. Correlation coefficients were calculated to determine the relationship between BEACH and NNDSS datasets. The BEACH analyses incorporated an adjustment for the cluster sample design. Initial BEACH analyses were performed using SAS version 9.1.3 (SAS Institute). Subsequent BEACH and NNDSS analyses were performed using Microsoft Excel and Stata version 11 (StataCorp).
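
For illustration, an odds ratio and confidence interval of the kind reported below can be approximated from the published proportions. The study used cluster-robust intervals, so this unadjusted Wald sketch is indicative only; the event counts are back-calculated from the reported proportions and PRP totals and are therefore approximate.

```python
# Hedged sketch: odds ratio with an unadjusted Wald 95% CI from event
# counts. The study reported cluster-robust CIs; counts here are
# back-calculated from the published proportions and are approximate.

from math import exp, log, sqrt

def odds_ratio_ci(a: int, n1: int, b: int, n2: int, z: float = 1.96):
    """OR (and CI) for a/n1 events versus b/n2 events."""
    or_ = (a / (n1 - a)) / (b / (n2 - b))
    se = sqrt(1/a + 1/(n1 - a) + 1/b + 1/(n2 - b))
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# All ages: ~198 of 11 557 PRPs tested in 2010-11 (1.71%) versus
# ~128 of 51 396 in 2000-04 (0.25%) -> OR ~7.0 (95% CI, ~5.6-8.7),
# close to the published 7.0 (5.5-8.8).
print(odds_ratio_ci(198, 11_557, 128, 51_396))
```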

During the study period, the BEACH program had ethics approval from the University of Sydney Human Research Ethics Committee and the Australian Institute of Health and Welfare Ethics Committee. This particular study was approved by the University of Queensland Medical Research Ethics Committee.

Results

The PRPs captured an average of 90.7% of all BEACH problems with a pertussis test ordered each year (range, 87.7%–92.7%) between April 2000 and March 2011 (Box 1). During the study period, PRPs as a proportion of all BEACH problems remained stable, with an annual average of 8.0% (range, 7.7%–8.7%).

When the BEACH data were grouped into pre-epidemic periods and epidemic years, the proportion of PRPs with a pertussis test ordered increased about sevenfold — from 0.25% to 1.71% — when comparing April 2000 – March 2004 with April 2010 – March 2011 (Box 2, Box 3). This increase was highly correlated with NNDSS pertussis notification rates (correlation coefficient [r], 0.99), which increased about fivefold during the same period (Box 3). A highly significant trend was detected for changes in BEACH pertussis test requests (P < 0.001) and NNDSS notification rates (P < 0.001) from April 2000 to March 2011.

In the age-specific analysis, there were significant increases in laboratory-confirmed pertussis notifications and in the likelihood of pertussis test requests across all age groups during the study period (Box 3). When comparing April 2000 – March 2004 and April 2010 – March 2011 pertussis testing rates, the largest increase was in 5–9-year-olds (odds ratio, 11.6; 95% CI, 4.2–36.7), followed by 0–4-year-olds, 40–59-year-olds and ≥ 60-year-olds. With the exception of 5–9-year-olds, the increase in pertussis testing exceeded notification changes in all age groups.

Numbers of NNDSS pertussis notifications fluctuated over the study period (Box 4). From 2008 onwards, there was a clear increase in the numbers of PCR-confirmed notifications. Before April 2008, most NNDSS notifications were confirmed by serological testing (66.0%–80.7% of all notifications annually). The proportion of notifications confirmed by PCR increased from 16.3% in April 2000 – March 2004 to 65.3% in April 2010 – March 2011 (Box 1). The proportion of notifications confirmed by culture did not change, and accounted for an average of 2.0% of all notifications over the study period.

Discussion

Consistent with experience in other developed countries,4,14–16 we found that rates of pertussis notifications in Australia increased dramatically in recent years, from an average annual rate of 34 per 100 000 population between April 2000 and March 2004 to 158 per 100 000 population between April 2010 and March 2011. Our results cast some light on the potential role that the increasing likelihood of a pertussis test request may have played in this change.

Using BEACH data, we found that individuals presenting to an Australian general practitioner between April 2010 and March 2011 were seven times more likely to have been tested for pertussis than 10 years earlier. This finding was within a set of general practice problems that remained stable as a proportion of all problems. This increased likelihood of pertussis testing, most evident in the epidemic years from April 2008 onwards, reached a maximum in the period April 2010 – March 2011, when pertussis tests were ordered in 1.71% of PRPs. A particular strength of our findings is that we used a data source that does not rely on laboratory testing, unlike most other datasets used to monitor pertussis in Australia and elsewhere.4,7,10,13

The increased likelihood of testing in general practice coincided with an increasing proportion of NNDSS pertussis notifications being confirmed by PCR, from 16.3% between April 2000 and March 2004 to 65.3% between April 2010 and March 2011. A review of pertussis cases in New South Wales during the period 2008–2009 also showed a shift away from serological testing (the predominant method before 2008) to PCR testing from 2008 onwards.10

Pertussis notification rates and the likelihood of testing varied considerably across age groups. There was a dramatic increase in notification rates in 0–4-year-olds and 5–9-year-olds from the 2004–2008 pre-epidemic period to the 2008–2009 epidemic year, compared with a moderate increase in testing, which indicates that there probably was a true increase in disease for these groups during this period. It is possible that a real increase in 0–4-year-olds and 5–9-year-olds early on prompted increased disease awareness among GPs, leading to widespread increases in testing across all ages. A positive feedback loop due to increased testing — leading to increased disease detection and awareness, leading to increased testing, and so on — may have been established. This interpretation is supported by the observation that although testing continued to increase from the epidemic year 2008–2009 to the epidemic year 2009–2010, there was little change in notifications, suggesting an increase in testing during that period rather than an increase in disease. In the other age groups, an increase in testing appeared to be responsible for magnified pertussis notifications. A study of pertussis resurgence in Toronto, Canada, also described this phenomenon and concluded that, although there had been a true increase in local disease activity, the apparent size of the increase had been magnified by an increase in the use of pertussis testing and improvements in test sensitivity.4

In Australia, public funding for pathology laboratories to use PCR to test specimens for pertussis (and other pathogens) commenced under the Medicare Benefits Schedule (MBS) in November 2005.17 This specific reimbursement for PCR testing (MBS item 69494) — a Medicare fee of $28.65 compared with $22.00 for culture (MBS item 69303) and $15.65 for serological testing (MBS item 69384)17 — may have been an incentive for laboratories across Australia to implement PCR testing more routinely. In addition, during the 2009 H1N1 influenza pandemic, public funding was allocated for the purchase of laboratory equipment (notably PCR suites), but much of the funding was not received by laboratories until after the demand for pandemic influenza testing had subsided.18 New PCR capacity may have provided an increased opportunity for laboratories to conduct PCR testing for other pathogens, such as B. pertussis.

Several factors may have contributed to an increase in disease during the study period. Pertussis laboratory testing methods have been documented to vary between children and adults. While, historically, culture would have been preferred to serological tests for the very young,19 children now are more likely to be tested using PCR, and adults are predominantly tested using serological tests.10 The variation in testing and notification rates across age groups may be due to differences in susceptibility and immunity.20 Pertussis vaccination does not provide lifelong immunity against infection, with protection waning between booster doses.21 Waning immunity may partially explain differences in pertussis incidence between age groups, with older individuals having lower immunity due to longer periods since vaccination.20 Furthermore, there is evidence to suggest that whole-cell pertussis vaccine formulations used in Australia and overseas before the late 1990s were more protective against B. pertussis infection than currently used acellular pertussis vaccines,14,22–24 resulting in immunity levels waning faster in some age cohorts due to changes in vaccination schedules.25 In addition, a recent analysis of B. pertussis isolates collected in Australia between 2008 and 2010 indicates that there has been increasing circulation of vaccine-mismatched strains, hypothesised to be due to the selective pressure of vaccine-induced immunity.26

While these or other factors may have led to a true increase in disease during the study period, our data suggest that increased testing, most likely due to expanding use of PCR during the study period, has almost certainly amplified the magnitude of notified pertussis activity in Australia. This increase in testing might have led to identification of illness that would have otherwise gone undetected among age groups in which pertussis circulates widely or age groups in which pertussis had previously been largely left as a clinical diagnosis.

Our findings have global implications, particularly for countries with high or expanding PCR availability. They highlight the critical importance of analysing changes in infectious diseases using a range of surveillance systems. By monitoring changes in laboratory testing and using surveillance datasets that do not rely on laboratory test results, it is possible to determine whether increases in notifications for diseases such as pertussis are due to a true increase in disease, an increase in testing, or a combination of both.

1 PRPs as a proportion of all BEACH problems with a pertussis test ordered and as a proportion of all BEACH problems, and NNDSS PCR tests as a proportion of all NNDSS pertussis notifications, April 2000 to March 2011

| Period (April – March) | PRPs as a proportion of all BEACH problems with a pertussis test ordered (total no. of pertussis tests) | PRPs as a proportion of all BEACH problems (total no. of PRPs) | NNDSS PCR tests as a proportion of all NNDSS pertussis notifications (total no. of pertussis notifications) |
| --- | --- | --- | --- |
| 2000–2004 | 89.4% (141) | 8.7% (51 396) | 16.3% (16 983) |
| 2004–2008 | 92.1% (216) | 7.9% (45 872) | 11.3% (31 559) |
| 2008–2009 | 87.7% (114) | 7.9% (12 551) | 55.4% (17 945) |
| 2009–2010 | 92.7% (164) | 7.8% (12 228) | 55.8% (22 754) |
| 2010–2011 | 91.7% (216) | 7.7% (11 557) | 65.3% (33 641) |

PRP = pertussis-related problem. BEACH = Bettering the Evaluation and Care of Health. NNDSS = National Notifiable Diseases Surveillance System. PCR = polymerase chain reaction.

2 Proportions of BEACH PRPs with a pertussis test ordered, and NNDSS pertussis notification rates, April 2000 to March 2011


BEACH = Bettering the Evaluation and Care of Health. PRP = pertussis-related problem.
NNDSS = National Notifiable Diseases Surveillance System. * Data for 2000–2004 and 2004–2008 are averaged annual rates, and data for 2008–2009, 2009–2010 and 2010–2011 are annual rates.

3 Proportions of BEACH PRPs with a pertussis test ordered, and NNDSS pertussis notification rates per 100 000 population, by age group, April 2000 to March 2011

| Age group | Dataset | 2000–2004* | 2004–2008* | 2008–2009† | 2009–2010† | 2010–2011† | Odds ratio (95% CI)‡ | Correlation coefficient (r)§ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 0–4 years | BEACH | 0.16% | 0.12% | 0.48% | 0.89% | 1.31% | 8.0 (3.9–17.2) | 0.89 |
| 0–4 years | NNDSS | 44.97 | 35.78 | 244.23 | 225.60 | 299.17 | 4.7 (4.3–5.2) | |
| 5–9 years | BEACH | 0.16% | 0.22% | 0.78% | 2.61% | 1.87% | 11.6 (4.2–36.7) | 0.75 |
| 5–9 years | NNDSS | 29.68 | 17.55 | 202.17 | 260.06 | 507.62 | 14.2 (12.8–15.7) | |
| 10–19 years | BEACH | 0.36% | 0.36% | 1.27% | 1.95% | 2.05% | 5.7 (2.8–11.4) | 0.88 |
| 10–19 years | NNDSS | 82.29 | 38.24 | 126.87 | 134.15 | 226.45 | 2.2 (2.1–2.3) | |
| 20–39 years | BEACH | 0.33% | 0.54% | 0.92% | 1.10% | 1.76% | 5.4 (3.5–8.5) | 0.98 |
| 20–39 years | NNDSS | 22.58 | 34.51 | 64.41 | 84.42 | 105.60 | 2.8 (2.6–3.0) | |
| 40–59 years | BEACH | 0.25% | 0.65% | 1.05% | 1.50% | 2.09% | 8.5 (5.2–14.0) | 0.99 |
| 40–59 years | NNDSS | 29.17 | 54.60 | 82.12 | 113.33 | 153.61 | 3.2 (3.0–3.4) | |
| ≥ 60 years | BEACH | 0.19% | 0.41% | 0.50% | 0.54% | 1.43% | 7.6 (4.3–13.7) | 0.90 |
| ≥ 60 years | NNDSS | 16.12 | 50.15 | 76.57 | 103.49 | 142.84 | 5.6 (5.1–6.2) | |
| All ages | BEACH | 0.25% | 0.43% | 0.80% | 1.24% | 1.71% | 7.0 (5.5–8.8) | 0.99 |
| All ages | NNDSS | 33.73 | 42.31 | 88.86 | 108.56 | 158.42 | 3.2 (3.2–3.3) | |

BEACH = Bettering the Evaluation and Care of Health. PRP = pertussis-related problem. NNDSS = National Notifiable Diseases Surveillance System. BEACH data are the proportion of PRPs with a pertussis test ordered; NNDSS data are notification rates per 100 000 population. * Pre-epidemic period; NNDSS data are average notifications per 100 000 population per year. † Epidemic year; NNDSS data are notifications per 100 000 population per year. ‡ Comparison of 2000–2004 and 2010–2011 data. § Correlation between BEACH and NNDSS data.

4 NNDSS pertussis notifications by laboratory test method, April 2000 to March 2011


NNDSS = National Notifiable Diseases Surveillance System. PCR = polymerase chain reaction.

Direct-to-consumer genetic testing — where should we focus the policy debate?

What are the implications for health systems, children and informed public debate?

Until recently, human genetic tests were usually performed in clinical genetics centres. In this context, tests are provided under specific protocols that often include medical supervision, counselling and quality assurance schemes that assess the value of the genetic testing services. Direct-to-consumer (DTC) genetic testing companies operate outside such schemes, as noted by Trent in this issue of the Journal.1 While the uptake of DTC genetic testing has been relatively modest, the number of DTC genetic testing services continues to grow.2 Although the market continues to evolve,3 it seems likely that the DTC genetic testing industry is here to stay.

This reality has led to calls for regulation, with some jurisdictions going so far as to ban public access to genetic tests outside the clinical setting.4,5 In Australia, as Nicol and Hagger observe, the regulatory situation is still ambiguous;6 regulation is further complicated by the activity of internet-accessible companies that lie outside Australia’s jurisdiction. In general, the numerous policy documents that have emanated from governments and scientific and professional organisations cast DTC services in a negative light, seeing more harms than benefits, and, in some jurisdictions, governments have tried to regulate their services and products accordingly.7,8 Policy debates have focused on the possibility that DTC tests could lead to anxiety and inappropriate health decisions due to misinterpretation of the results. But are these concerns justified? Might they be driven by the hype that has surrounded the field of genetics in general? If so, what policy measures are actually needed and appropriate?

Time for a hype-free assessment of the issues?

Driven in part by the scientific excitement associated with the Human Genome Project, high expectations and a degree of popular culture hype have attracted both public research funds and venture capital to support the development of disease risk-prediction tests.3 This hype — which, to be fair, is created by a range of complex social and commercial forces9 — likely contributed to both the initial interest in the clinical potential of genetic testing and the initial concerns about possible harms. Both are tied to the perceived — and largely exaggerated — predictive power of genetic risk information, especially in the context of common diseases. There are numerous ironies to this state of affairs, including the fact that the call for tight regulation of genetic testing services may have been the result, at least in part, of the hype created by both the research community and the private sector around the utility of genetic technologies.9 This enthusiasm helped to create a perception that genetic information is unique, powerful and highly sensitive, and specifically that, as a result, the genetic testing market warrants careful oversight.

Now that research on both the impact and utility of genetic information is starting to emerge, a more dispassionate assessment can be made about risks and the need for regulation. Are the concerns commonly found in policy reports justified? Where should we direct our policymaking energy?

It may be true that consumers of genetic information — and, for that matter, physicians — have difficulty understanding probabilistic risk information. However, the currently available evidence does not show that the information received from DTC companies causes significant individual harm, such as increased anxiety or worry.10,11 In addition, there is little empirical support for the idea that genetic susceptibility information results in unhealthy behavioural changes (eg, the adoption of a fatalistic attitude).5

The concerns about consumer anxiety and unhealthy behaviour change have driven much of the policy discussion surrounding DTC testing. As such, the research could be interpreted as suggesting that there is no need for regulation or further ethical analysis. This is not the case. We suggest that the emerging research invites us to focus our policy attention on issues that reach beyond the potential harms to the individual adult consumer — where, one could argue, there seems to be little empirical evidence to support the idea that the individual choice to use DTC testing should be curtailed — to consideration of the implications of DTC testing for health systems, children and informed public debate.

Health system costs

Although genetic testing is often promoted as a way of making health care more efficient and effective by enabling personalised medical treatment, it has been suggested that the growth in genetic testing will increase health system costs. A recent survey of 1254 United States physicians reported that 56% believed new genetic tests will increase overall health care spending.12

Will DTC testing exacerbate these health system issues by increasing costs and, perhaps, the incidence of iatrogenic injuries due to unnecessary follow-up? This seems a reasonable concern given that studies have consistently shown that DTC consumers view the provided data as health information that should be brought to a physician for interpretation. One study, for example, found that 87% of the general public would seek more information about test results from their doctor.13 The degree to which these stated intentions translate into actual physician visits is unclear. But for health systems striving to contain costs, even a small increase in use is a potential health policy issue, particularly given the questionable clinical utility of most tests offered by DTC companies. It seems likely that there will be an increase in costs with limited offsetting health benefits — although more research is needed on both these possible outcomes.

Compounding the health system concerns is the fact that few primary care physicians are equipped to respond to inquiries about DTC tests. A recent US study found that only 38% of the surveyed physicians were aware of DTC testing and even fewer (15%) felt prepared to answer questions.14 As Trent notes, even specialists can encounter difficulties in interpreting DTC genetic tests.1 This raises interesting questions about how primary care physicians will react to DTC test results. Will they, for example, order unnecessary follow-up tests or referrals, thus amplifying the concerns about the impact of DTC testing on costs?

Testing of children

While there is currently little evidence of harm caused by DTC genetic testing, most of the research has been done in the context of the adult population. The issues associated with the testing of minors are more complicated, involving children’s individual autonomy and their right to control information about themselves. Many DTC genetic testing companies include tests for adult-onset diseases or carrier status. Testing children for such traits contravenes professional guidelines. Nevertheless, research indicates that only a few DTC companies have addressed this concern. A study of 29 DTC companies found that 13 did not have policies on the issue and eight allowed testing if requested by a parent.15 While it is hard to prevent parents from submitting samples from minors to genetic testing companies, this calls for an important policy debate on whether there are limits on parental rights to access the genetic information of their children. Current paediatric genetic guidelines recommend delaying testing in minors unless it is in their best interests, but these are not enforceable and not actively monitored.16

In addition, unique policy challenges remain with regard to the submission of DNA samples in a DTC setting. It is difficult for DTC companies to verify that the sample received is from the person claiming to be the sample donor. Policymakers should consider strategies, such as sanctions, to prevent tests being ordered without the consent of the person tested.

Truth in advertising

The DTC industry is largely based on reaching consumers via the internet. Research has shown that the company websites — which, in many ways, represent the face of the industry — contain a range of untrue or exaggerated claims of value.17 Advertisements for tests that have no or limited clinical value have a higher risk of misleading consumers, because the claims needed to promote these services are likely to be exaggerated. It is no surprise that stopping the dissemination of false or misleading statements about the predictive power of genetics has emerged as one of the most agreed policy priorities.8 While evidence of actual harm caused by this trend is far from robust, it is hard to argue against the development of policies that encourage truth in advertising and the promotion of more informed consumers. Moreover, the claims found on these websites may add to the general misinformation about value and risks associated with genetic information that now permeates popular culture. Taking steps to correct this phenomenon is likely to help public debate and policy deliberations. For example, this might include a coordinated and international push by national consumer protection agencies to ensure that, at a minimum, the information provided by DTC companies is accurate.18

Conclusion

These are not the only social and ethical issues associated with DTC genetic testing. Others, like the use of DTC data for research and the implications of cheap whole genome sequencing, also need to be considered. But they stand as examples of issues worthy of immediate policy attention, regardless of what the evidence says about a lack of harm to individual adult users. We must seek policies that, on the one hand, allow legitimate commercial development in genomics and, on the other, achieve appropriate and evidence-based consumer protection. In finding this balance, we should not be distracted by hype or unsupported assertions of either harm or benefit.

Screening, referral and treatment for depression in patients with coronary heart disease

In 2003, an Expert Working Group of the National Heart Foundation of Australia (NHFA) issued a position statement on the relationship between “stress” and heart disease. They concluded that depression was an important independent risk factor for first and recurrent coronary heart disease (CHD) events.1 Here, we provide an update on evidence obtained since 2003 regarding depression in patients with CHD, and include guidance for health professionals on screening and treatment for depression in these patients. Our statement refers to depression in general (mild, moderate and severe), as all grades of depression have an impact on CHD prognosis. The process for developing this consensus statement is described in Box 1. Treatment decisions should take into account the individual clinical circumstances of each patient.

Epidemiology

The prevalence of depression is high in patients with CHD. Rates of major depressive disorder of around 15% have been reported in patients after myocardial infarction (MI) or coronary artery bypass grafts.3,4 If milder forms of depression are included, a prevalence of greater than 40% has been documented.3,4 Recently, the EUROASPIRE III study investigated 8580 patients after hospitalisation for CHD.5 The proportion of patients with depression, measured by the Hospital Anxiety and Depression Scale, varied from 8.2% to 35.7% in men and 10.3% to 62.5% in women. This is consistent with Australian and New Zealand data from a 6-year study, Long-term Intervention with Pravastatin in Ischaemic Disease (LIPID).6,7 At the end of this trial, 27% of men and 35% of women were identified as depressed, using the Beck Depression Inventory II (BDI-II) questionnaire.

A large systematic review in 2006 suggested that individuals with depression, but no current CHD, have a moderately elevated relative risk (1.6) of a later index CHD event.8 This elevated risk was confirmed in the Whitehall II study of 5936 healthy individuals over a 6-year period, in which depression was associated with a hazard ratio of 1.93 for cardiovascular events.9 In the Nurses Health Study, 78 282 healthy women were assessed for depression. In the 6-year follow-up period, 4654 deaths were reported, including 979 deaths from cardiovascular disease.10 Depression was associated with increased all-cause mortality, with an age-adjusted relative risk of 1.76 (95% CI, 1.64–1.89).10 The effect of depression on CHD incidence is thought to be strongest around the time of the depressive episode, with longer-term effects mediated via recurrence of depression.11 In young people, the association between depression and CHD may be stronger.12

The case–control INTERHEART Study included 11 119 patients with MI from 52 countries.13 Perceived stress and depression were shown to be important risk factors, which together accounted for 32.5% of the population attributable risk (PAR) for CHD, suggesting that together they were as important as smoking and more important than diabetes (PAR, 9.9%) and hypertension (PAR, 17.9%) as risk factors for CHD.13

For people with CHD and comorbid depression, the relative risk (RR) of death is increased (RR, 1.80 [95% CI, 1.50–2.15]), independent of standard risk factors for secondary prevention.8 Comorbid depression also leads to a higher risk of other adverse outcomes in patients with CHD, such as a lower likelihood of return to work, poorer exercise tolerance, less adherence to therapy, greater disability, poorer quality of life, cognitive decline and earlier dependency.14–20 Individuals with CHD and comorbid depression often have less access to interventions for CHD, despite being in a higher-risk group.21–23

Definition of depression and types of depression

The diagnosis of depression can be difficult in people with cardiovascular disease, as depressive symptoms such as fatigue and low energy are common to both CHD and heart failure, and may also be a side effect of some drugs used to treat cardiovascular disease, such as β-blockers.24 The diagnosis may be further complicated in such patients by their responses to their disease (and the associated stigma), which may include denial, avoidance, withdrawal and anxiety.

According to the Diagnostic and statistical manual of mental disorders, fourth edition (DSM-IV),25 major depression is diagnosed when there is a minimum of 2 weeks of depressed mood and/or lack of pleasure (anhedonia), accompanied by four or more other (listed) symptoms such as sleep disturbance, appetite disturbance, poor energy, psychomotor impairment or agitation, poor concentration or poor decision making, and suicidal ideas or thoughts of death. The association with CHD appears to increase with greater severity of depressive symptoms across the spectrum, with no discrete cut-off point at “major depression”. Some studies have suggested links between CHD and particular subtypes of depression, such as somatic or anhedonic depression, but these are not consistent findings.26–28

Screening for depression in patients with coronary heart disease

Screening of a population group for a risk factor or disease is worthwhile when the risk factor or disease has a reasonably high prevalence, there is a robust screening test, and effective and cost-effective treatments are readily available.29,30 Depression is both a risk factor and a disease in its own right, and fulfils these criteria for population screening. Screening for depression in patients with CHD would be expected to produce a higher yield than screening for depression in the general population, owing to a much higher prevalence of depression in patients with CHD. It is important to recognise depression in patients with CHD in order to provide the best possible care. Asymptomatic patients with significant cardiovascular risk factors (eg, those with diabetes) may also be considered for screening, as they have a high risk of depression.31

Many self-report screening tools exist for detecting possible depression. These include the Patient Health Questionnaire (PHQ-2, PHQ-9), the Cardiac Depression Scale (CDS), the BDI-I and BDI-II, and the Hospital Anxiety and Depression Scale.32,33 The BDI appears to be the most commonly used tool in studies involving cardiac patients. The CDS was developed by a member of the Expert Working Group (D L H) specifically for patients with cardiac disease.33 Its short version (short form) has only five items. There is limited but expanding information on the use of the PHQ-9 in patients with cardiac disease,34,35 and it is used widely in primary care. Simple tools such as the Kessler Psychological Distress Scale (K10),36 a measure of general distress, will often overdiagnose depression. The K10 is currently used in mental health plans in Australia; however, there is no evidence of its use specifically for patients with CHD.

Recognising the need for a simple screening tool for depression in cardiovascular patients, the 2008 American Heart Association (AHA) Science Advisory suggested the use of the PHQ-2.37 The PHQ-2 is an abbreviated form of the PHQ-9, with only the first two of the nine questions in the PHQ-9 (Box 2).38 There are also other versions of the PHQ-2, which may use shorter time frames. The AHA recommended the use of the PHQ-9 if depression was noted using the PHQ-2.37 The Royal Australian College of General Practitioners’ Guidelines for preventive activities in general practice (the red book) also uses a categorical (Yes/No) version of the PHQ-2.39

The PHQ-2 and the PHQ-9 screening tools are associated with reasonable sensitivity and specificity.34 Importantly, depression diagnosed with the PHQ-2 and the PHQ-9 has been shown to predict worse CHD outcomes. In the Heart and Soul Study, positive responses to either question in the PHQ-2 (Yes/No version) predicted a 55% greater risk of cardiovascular events.35 Furthermore, the validity of the PHQ-2 and the PHQ-9 has been assessed in a variety of patients with varying clinical problems, ages and ethnicities, including in Australian Aboriginal people from urban and rural areas and people from the Torres Strait Islands.40–42 Adapted versions exist for use with Indigenous people.
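
As an illustration of how simple the PHQ-2 is to operationalise, the sketch below scores the two items. The 0–3 item scale and the cut point of 3 are the commonly used scored form of the instrument, stated here as an assumption since the text above describes the categorical Yes/No version, which simply flags any positive response.

```python
# Illustrative PHQ-2 scoring: each of the two items (Box 2) is rated
# 0-3 for frequency over the past 2 weeks; a total of >= 3 is a
# commonly used cut point prompting fuller assessment (eg, PHQ-9).

def phq2_positive(item1: int, item2: int, cutoff: int = 3) -> bool:
    if not all(0 <= i <= 3 for i in (item1, item2)):
        raise ValueError("items are scored 0-3")
    return item1 + item2 >= cutoff

print(phq2_positive(2, 1))  # True: proceed to PHQ-9 / clinical review
```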

Access to the tools varies. The PHQ-2, the PHQ-9 and the CDS are free to use, and their use breaches no copyright.43 However, some questionnaires, such as the BDI-I and -II, are subject to copyright, and a royalty must be paid each time they are used.44

Implicit in the AHA Science Advisory37 is that screening and identification of patients with depression leads to appropriate treatment, or referral for treatment, by the responsible attending medical practitioner. Unfortunately, research has shown that screening alone may have little or no impact on the treatment of depression or on outcomes.45,46 Screening by nurses, researchers, receptionists or social workers is not sufficient unless it is followed by appropriate referral or treatment.

It is recommended that a simple tool, such as the PHQ-2 or the short-form CDS, be incorporated into routine screening of patients with CHD. Routine screening for depression is indicated at first presentation, again at the next follow-up appointment, and then 2–3 months after a CHD event. Screening should subsequently be considered on a yearly basis, as for any other major risk factor for CHD. Consideration should also be given to screening the partner or spouse of these patients, as studies show that they are at increased risk of developing depression.47 If screening is followed by comprehensive care, depression outcomes are likely to be improved.

Treatment of depression in patients with CHD

Collaborative care

Although individual treatment approaches and strategies have been studied, in practice a collaborative-care or stepped-care approach is probably optimal for managing patients with CHD and comorbid depression. Collaborative care involves a group of health professionals working together in a coordinated manner; compared with standard care, this approach has consistently been associated with greater improvement in depression in patients with CHD, and it is cost-effective.48–52 For example, collaborative care after coronary artery bypass grafting improved depression scores, but not physical function or re-hospitalisation rates.48 In patients with depression comorbid with poorly controlled diabetes and/or CHD, the collaborative approach resulted in improvements in depression scores, glycated haemoglobin levels, low-density lipoprotein cholesterol levels and systolic blood pressure.50

The Coronary Psychosocial Evaluation Studies (COPES) trial52,53 used a stepped-care treatment approach in patients with acute coronary syndromes (ACS) and persistent depression. Depressive symptoms decreased substantially in the intervention (stepped-care) group. Only three (4%) of the intervention patients experienced major adverse cardiac events compared with 10 (13%) of the patients given usual care, suggesting improved cardiovascular outcomes. Moreover, this stepped-care approach was associated with a 43% lower total health cost over the 6-month trial period.54

Pharmacological therapy

The efficacies of fluoxetine,55 sertraline (Sertraline Antidepressant Heart Attack Randomized Trial [SADHART],56 Enhancing Recovery in Coronary Heart Disease Patients [ENRICHD] trial),57 citalopram (Cardiac Randomized Evaluation of Antidepressant and Psychotherapy Efficacy trial [CREATE])58 and mirtazapine (Myocardial Infarction and Depression Intervention Trial [MIND-IT])59 have been evaluated in clinical trials involving patients with CHD.

SADHART compared sertraline with placebo over 6 months in patients with depression after ACS. Depression scores improved significantly more in patients taking sertraline than in those receiving placebo. Most patients were also prescribed aspirin, statins and β-blockers. Life-threatening cardiovascular events occurred less frequently in the sertraline group, but this difference was not statistically significant.56

ENRICHD was a large trial that evaluated the effect of cognitive behaviour therapy (CBT) on depression or low social support in patients with a recent MI. Depression was diagnosed in 74% of participants. CBT improved depression but failed to reduce the number of CHD events. Patients whose depression did not respond to CBT were referred for treatment with antidepressant drugs. Selective serotonin reuptake inhibitors (SSRIs; mainly sertraline) significantly improved depression in those patients, and the SSRI-treated group showed a 43% reduction in deaths or recurrent MIs (P < 0.005).57 However, this was a subset analysis and is therefore hypothesis-generating only.

In the Canadian CREATE trial58 and the MIND-IT trial,59 there were too few CHD events reported to enable analysis of cardiovascular outcomes.

Tricyclic antidepressants have been associated with increased mortality in patients with CHD and may worsen CHD outcomes; they should therefore be avoided in this group.60–62 In contrast, a recent meta-analysis of trials of SSRIs in patients with CHD concluded that this class of drugs was well tolerated, with a risk of adverse events similar to that of placebo.63

Psychological therapy

Of the various psychological therapies, CBT and integrative therapies (eg, interpersonal psychotherapy) have the best-documented efficacy for treatment of major depressive disorder.64,65 CBT was used in the ENRICHD trial,57 interpersonal psychotherapy in the CREATE trial,58 and problem-solving therapy in the COPES trial.52,53 These therapies were all beneficial for depression but did not affect CHD outcomes. The efficacy of psychological therapy as a treatment for major or minor depression has also been evaluated in patients who underwent coronary artery bypass surgery.66 Significantly more patients in the CBT group (71%) and the stress management group (57%) had low levels of depressive symptoms than those receiving usual care (33%), and these results were maintained at 6 months.66

A Cochrane review of psychological interventions for patients with CHD found evidence of small-to-moderate improvements in depression and anxiety symptoms with such interventions, but no strong evidence that they reduced total deaths, risk of revascularisation or non-fatal infarction.67 Less effective interventions were those that aimed to educate patients about cardiac risk factors, those that included client-led discussion and emotional support, and those that involved family members in the treatment process.67 Uncertainty remains regarding which subgroups of patients would benefit most from psychological treatments and what characterises successful interventions.

Exercise

Many patients with mild depression respond well to regular exercise and exercise-based cardiac rehabilitation. A recent Cochrane review of exercise as a treatment for depression concluded that exercise improves depression with efficacy similar to that of CBT.68 The benefit of exercise appears to follow a dose–response relationship, requiring at least 30 minutes of moderate aerobic activity on 5 days per week.69,70 This is consistent with usual public health recommendations.

The benefit of exercise in patients with CHD and depression was demonstrated in the recent UPBEAT (Understanding the Prognostic Benefits of Exercise and Antidepressant Therapy) trial.71 Patients with CHD and at least a mildly elevated depressive symptom score (BDI score > 7) were randomly allocated to treatment with SSRIs, exercise or neither. Exercise was equivalent to SSRI treatment in improving depression scores, with patients in both groups showing greater improvement than the control group.71 In a large randomised controlled trial (RCT) of 2322 patients with heart failure (28% of whom were depressed), exercise not only reduced mortality and hospitalisation (P = 0.03) but also significantly reduced depression (P = 0.002).72

Complementary and alternative therapies

Up to 50% of patients with depression have been shown to use complementary and alternative medicines without disclosing this to their treating clinician.73 Therapies that may be effective in depression include supplemental marine n-3 fatty acids (eicosapentaenoic acid [EPA] and docosahexaenoic acid [DHA]), S-adenosylmethionine (SAMe) and St John’s wort.74 Specific trials of the latter two therapies have not been performed in patients with CHD and depression.

Marine n-3 fatty acids (at a dose of 1 g per day of combined EPA–DHA) are recommended by the NHFA and the AHA for all patients with CHD.75 This dose may also improve mild depression. However, adding 2 g/day of combined EPA and DHA to sertraline 50 mg daily appears to provide no added benefit for depressive symptoms over sertraline alone.76 Some trials comparing St John’s wort and SAMe with antidepressant medications suggest similar effectiveness in improving depression.77–79 However, most commercial brands of St John’s wort have not undergone randomised trials.78,79

Adherence

Depression is a major predictor of poor adherence in patients with CHD, whether to drug therapy or to lifestyle measures.80,81 Patients with depression are three times more likely to be non-compliant with medical treatment than patients without depression.82 Greater severity and chronicity of depression have been associated with poorer adherence to aspirin therapy after MI.83 Adherence to aspirin therapy after ACS has been shown to be significantly lower in persistently depressed patients (76.1%) than in those whose depression improved (87.4%) or who were not depressed (89.5%).83 Patients who are persistently depressed are also less likely to undertake risk-reducing behaviours such as quitting smoking, taking medications, exercising and attending cardiac rehabilitation.81 The SADHART trial showed that adherence to medication increased after remission of depression in 68.4% of participants taking the trial medication.84

A recent RCT of a collaborative-care depression treatment program in 134 patients with depression after ACS demonstrated improved adherence to medications and secondary prevention behaviours, and this improvement was independently associated with improvement in depression.85 However, in another RCT of 157 patients undergoing treatment for depression after ACS, adherence to risk-reducing behaviours did not improve despite a significant reduction in depression.86

Referral

Once depression is identified through screening, treatment may be initiated immediately, or referral to psychological or psychiatric services may be appropriate. Most patients with depression in Australia are managed by general practitioners.87

Members of the Cardiac Society of Australia and New Zealand, the majority of whom are clinical cardiologists, were surveyed regarding their assessment of depression. Most respondents screened for depression only occasionally, and just 3% used a formal tool. Lack of confidence in identifying depression was the strongest predictor of low screening frequency. Cardiologists rarely initiated treatment for depression, and 43% did not consider themselves responsible for treating it.88

There can be a reluctance to treat depression in patients with CHD because of a belief that depression is normal after an acute cardiovascular event. Mild depression may resolve spontaneously; however, in most individuals with CHD, depression persists long term.89

Conclusion

A summary of the key evidence-based points is provided in Box 3 and Box 4, and the Appendix gives the National Health and Medical Research Council grades of recommendations and evidence hierarchy.

High-quality care for the treatment of depression is achievable and affordable. The benefits of treating depression in people with CHD include improved quality of life, improved adherence to other therapies and potentially improved CHD outcomes.90 Effective treatment of depression may decrease CHD events, but this is not proven: no adequately powered trials have been completed, nor are any ongoing.

1 Process used to develop this National Heart Foundation of Australia consensus statement

The Expert Working Group members performed relevant literature searches using key search phrases including, but not limited to, “stress”, “depression”, “anxiety”, “treatment of depression”, “acute coronary syndromes”, “adherence and depression” and “screening for depression”. This was complemented by reference lists compiled from reviews and the personal collections of the Expert Working Group members.

Searches were limited to evidence available for human subjects with coronary heart disease published in English up to December 2012. The recommendations made in this consensus statement have been graded according to the National Health and Medical Research Council guidelines (see Appendix).2 The Cardiac Society of Australia and New Zealand, beyondblue: the national depression initiative and the Royal Australian and New Zealand College of Psychiatrists were consulted during the development of this document and have endorsed its content.

2 Patient Health Questionnaire (PHQ-2) Yes/No version35

  • During the past month, have you often been bothered by feeling down, depressed or hopeless?

  • During the past month, have you often been bothered by little interest or pleasure in doing things?

3 National Heart Foundation of Australia grades of recommendation and levels of evidence for screening, referral and treatment for depression in patients with coronary heart disease (CHD)2

Each recommendation is shown with its grade of recommendation and level of evidence.2

1. For patients with CHD, it is reasonable to screen for depression: Grade A, Level I

2. Treatment of depression in patients with CHD is effective in decreasing depression: Grade A, Level I

3. Treatment of depression in patients with CHD improves CHD outcomes: Grade D, Level II

4. Treatment of depression in patients with CHD changes behavioural risk factors/adherence: Grade B, Level III-2

5. Exercise is an effective treatment of depression in patients with CHD: Grade A, Level I

6. Exercise improves CHD outcomes in patients with CHD: Grade B, Level II

7. Psychological interventions improve depression in patients with CHD: Grade B, Level II

8. Psychological interventions improve CHD outcomes in patients with CHD and depression: Grade D, Level II

9. SSRIs improve depression in patients with CHD: Grade A, Level I

10. SSRIs improve CHD outcomes in patients with CHD and depression: Grade D, Level III-1

11. Collaborative-care approach improves depression in patients with CHD: Grade B, Level II

12. Collaborative-care approach improves CHD outcomes in patients with CHD and depression: Grade D, Level II

SSRIs = selective serotonin reuptake inhibitors.

4 Treatment of depression in patients with coronary heart disease (CHD) — summary of treatment subgroup effects showing grade of recommendation and level of evidence

Each treatment is shown with its grade of recommendation and level of evidence2 for the depression outcome and for the CHD outcome.

Non-drug

  • Exercise: depression Grade A, Level I; CHD outcome Grade B, Level II

  • Psychological, including CBT: depression Grade B, Level II; CHD outcome Grade D, Level II

  • St John’s wort*: depression Grade D, Level —*; CHD outcome Grade D, Level —*

  • n-3 fatty acids: depression Grade D, Level II; CHD outcome Grade D, Level II

  • SAMe*: depression Grade D, Level —*; CHD outcome Grade D, Level —*

  • Collaborative care: depression Grade B, Level II; CHD outcome Grade D, Level II

Drug

  • SSRIs: depression Grade A, Level I; CHD outcome Grade D, Level III-1

CBT = cognitive behaviour therapy. SAMe = S-adenosylmethionine. SSRIs = selective serotonin reuptake inhibitors. * Insufficient evidence to rate or no trials have been performed. Data not available in patients with CHD.

Appendix: Definition of National Health and Medical Research Council (NHMRC) grades of recommendations and evidence hierarchy*

Definition of NHMRC grades of recommendations

  • Grade A: Body of evidence can be trusted to guide practice

  • Grade B: Body of evidence can be trusted to guide practice in most situations

  • Grade C: Body of evidence provides some support for recommendation(s), but care should be taken in its application

  • Grade D: Body of evidence is weak and recommendation must be applied with caution

NHMRC evidence hierarchy: designation of levels of evidence (intervention studies)

  • Level I: A systematic review of level II studies

  • Level II: A randomised controlled trial

  • Level III-1: A pseudorandomised controlled trial (ie, alternate allocation or some other method)

  • Level III-2: A comparative study with concurrent controls: a non-randomised experimental trial, cohort study, case–control study, or interrupted time series with a control group

  • Level III-3: A comparative study without concurrent controls: a historical control study, two or more single-arm studies, or an interrupted time series without a parallel control group

  • Level IV: Case series with either post-test or pre-test/post-test outcomes

* From NHMRC additional levels of evidence and grades for recommendations for developers of guidelines.2

A cluster randomised controlled trial of vascular risk factor management in general practice

To the Editor: Harris and colleagues report that their lifestyle intervention in the Health Improvement and Prevention Study (HIPS) failed to reduce several important risk factor-based intermediate outcomes for vascular disease.1 Given that only 117 of 384 participants completed at least two of the six group sessions, a positive result could not be expected from an intention-to-treat analysis. We know that interventions for the prevention of cardiovascular disease (CVD) and diabetes can be run successfully in Australian primary care, which raises questions about the design of Harris et al’s intervention.

In 2007, the Council of Australian Governments identified an effective Australian primary care-based diabetes and CVD risk prevention program — the Greater Green Triangle Diabetes Prevention Program2 — which has been shown to change dietary behaviour via an effective theoretical model.3

The Counterweight program used in Harris et al’s intervention, which is based on the obsolete Transtheoretical (Stages of Change) Model (TTM),4 was aimed at weight reduction, not CVD risk. The measures included fruit and vegetable consumption, but not CVD-specific measures such as the saturated-to-polyunsaturated fat ratio and salt intake. The effect of dietary changes could therefore not be measured, so feedback on this could not be provided to participants, further weakening the intervention.

The aims of the intervention, the training, the methods of empowerment and the tools used are unclear. If the training was based on the TTM, this might explain the negative results, particularly given the number of trials showing that the TTM neither works nor explains behavioural change.

Further trials are not needed because full-scale implementation of an intervention based on the Greater Green Triangle Diabetes Prevention Program — the Department of Health-funded program Life! Helping You Prevent Diabetes, Heart Disease and Stroke — has already occurred in Victoria.5 General practitioners should not lose confidence in lifestyle modification for prevention of CVD and diabetes.