Supporting rural health care

Overcoming the barriers and seizing the opportunities to provide more equitable health care for Australia’s rural population

Australia’s rural population, which comprises a third of our total population, presents distinct challenges for health care delivery. The low population density of much of rural Australia, the great distances involved, and the limited number of larger centres offering high-level medical care make equitable health care delivery difficult to achieve. Added to these difficulties is the overrepresentation of socioeconomic disadvantage and Indigenous people in rural and remote communities. The overall picture is that of a population with a heavier disease burden, more barriers to accessing appropriate care, and poorer outcomes from cardiovascular disease, stroke, diabetes and cancer than the metropolitan population.

The rural health workforce is ageing and, particularly in remote areas, remains heavily reliant on international medical graduates. There is also a significant maldistribution of the medical workforce, with the rural sector having only a third to half of the workforce of metropolitan areas, on a population basis. Surgical and medical specialists are most markedly underrepresented.1

Certain medical interventions will always be difficult to deliver to some rural populations. For example, primary percutaneous coronary intervention (PCI) for ST-segment-elevation myocardial infarction and thrombolysis for acute stroke, while increasingly available in regional referral hospitals, rely on timely delivery to be effective, and this will not always be possible for people in more remote locations. The SNAPSHOT ACS study reported in this issue of the Journal indicates that not only are patients in rural areas less likely than those in urban areas to undergo echocardiography, coronary angiography and PCI, they are also less likely to receive guideline-recommended medications, cardiac rehabilitation and dietary advice.2 This finding is consistent with shortages of allied health workers and specialists in rural and remote settings.

Nevertheless, there have been recent initiatives that will benefit rural health in the long term. The Rural Clinical School program, operating for over a decade, is now reaping rewards, with graduates returning to rural areas as junior medical officers, general practitioners or specialists.3 Health Workforce Australia and the Royal Australasian College of Physicians (RACP) are supporting dual training in general medicine and a subspecialty, with two positions to be based in Orange and Dubbo in New South Wales next year.4 This initiative addresses the lack of rural advanced training positions — a key concern in the April 2013 report of the Mason review.5 Given that the inability to undertake a major portion of advanced training in a chosen specialty in a rural location is a significant barrier to rural recruitment, such training positions require funding. Queensland Health’s Rural Generalist Pathway and the RACP’s dual training model are two examples of programs that could be expanded, both within their own disciplines and into other specialties.

Opportunities to strengthen medical training — both before and after medical school graduation — should be seized. The Rural Clinical School program’s expansion beyond medical training into interdisciplinary education and simulation centres prepares trainees well for rural practice. However, this approach is challenged by insufficient rural intern positions. State governments should consider increasing the number of rural intern positions available.

Unless the complexity and cost of providing health services to sparsely populated, geographically dispersed communities are fully appreciated, activity-based funding will pose risks to rural service provision. Patient transport is often a limiting factor in appropriate and timely service provision, and it too needs to be reviewed.

The federal government is well aware of the issues in rural and Indigenous health, and several of its initiatives, such as improving rural training opportunities, need to be acknowledged. It is important that the medical profession continues to work with both the federal government and rural communities to support, promote and strengthen these initiatives. It is no longer realistic to expect one or two doctors in small rural communities to provide 24-hour care every day of the year. Instead, best practice requires groups of practitioners with complementary skills to work together to achieve appropriate, and safer, health care. An important function for politicians is to explain the necessity of these changes in medical practice to the community. Nurse practitioners could have a major role, and expansion of these positions in rural group practices and multipurpose services would facilitate better care, especially for patients with chronic illness.

Australia has made steady progress in recent years in providing effective health care to rural and regional communities. Future governments require strength of resolve, clear vision and sound policies to maintain that progress for the third of the Australian population who live beyond the major cities.

Vitamin B12 and folate tests: the ongoing need to determine appropriate use and public funding

It’s not as simple as new for old: we need to follow a process for “disinvestment” in existing medical procedures, services and technologies

Criteria have been developed for assessing the safety, effectiveness and cost-effectiveness of new and emerging health interventions, but additional challenges exist in identifying opportunities for reducing the use of existing health technologies or procedures that are potentially overused, (cost-)ineffective or unsafe.1 Criteria have been proposed to flag technologies that might warrant further investigation under quality improvement programs.1 These criteria are: new evidence becomes available; there is geographical variation in use; variation in care between providers is present; the technology has evolved and differs markedly from the original; there exists a temporal trend in the volume of use; public interest or controversy is present; consultation with health care workers and funders raises concerns; new technology has displaced old technology; there is evidence of leakage (use beyond the restriction or indication); the technology or intervention is a “legacy item” that has never been assessed for cost-effectiveness; use is not in accordance with clinical guidelines; or the technology is nominated by clinical groups.

After such a nomination was made by members of the clinical laboratory community regarding vitamin B12 and folate tests, we sought to determine whether these tests met other criteria. We hope that this article will encourage debate and discussion about the appropriate use of these tests.

Testing for vitamin B12 and folate deficiency

Diagnosing vitamin B12 and folate deficiencies is difficult. The symptoms are diverse (such as malaise, fatigue and neurological symptoms), as are the signs (including megaloblastic anaemia and cognitive impairments). Defining target conditions is, therefore, also difficult. Tests include a full blood count and blood film examination, serum B12, serum folate and red-cell folate (RCF) assays, as well as examination of metabolic markers such as methylmalonic acid (MMA) and homocysteine (Hcy). Untreated vitamin B12 deficiencies may cause serious health problems, including permanent neurological damage (which may occur with low serum B12 levels without haematological changes). Maternal folate deficiencies have been associated with neural tube defects in infants. Potential vitamin B12 or folate deficiencies therefore need to be appropriately investigated and managed.

New evidence

The utility of a diagnostic test is influenced in part by its precision (the ability of a test to faithfully reproduce its own result) and its diagnostic accuracy (ability to discriminate between a patient with a target condition and a healthy patient). Evidence suggests serum B12 tests have poor discriminative ability in many situations, and debate is ongoing over which folate assay is most useful.
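To make the discrimination half of this definition concrete, here is a minimal Python sketch computing sensitivity and specificity from a hypothetical 2 × 2 classification table; the counts are invented for illustration and are not drawn from any study cited here.

```python
# Hypothetical 2x2 table for a deficiency test (counts are invented):
tp, fp = 80, 30   # truly deficient and flagged; replete but flagged
fn, tn = 20, 70   # truly deficient but missed; replete and cleared
sensitivity = tp / (tp + fn)  # probability of detecting true deficiency
specificity = tn / (tn + fp)  # probability of clearing the truly replete
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
# Poor discriminative ability inflates fp and fn, misclassifying
# individuals as deficient or replete, as reported for serum B12 tests.
```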

The only systematic review and meta-analysis of the diagnostic accuracy of serum B12 tests (conducted by members of our group) suggested that these tests often misclassify individuals as either B12 deficient or B12 replete.2 These findings are consistent with other reports in the literature. One recent report states:

Both false negative and false positive values are common (occurring in up to 50% of tests) with the use of the laboratory-reported lower limit of the normal range as a cutoff point for deficiency.3

And further:

There is often poor agreement when samples are assayed by different laboratories or with the use of different methods.3

Widespread CBLA (competitive-binding luminescence assay) malfunction has also been noted, with assay failure rates of 22% to 35%4 (interference due to intrinsic factor antibodies may explain some of this variation). While a critical overview has suggested that “falsely normal cobalamin concentrations are infrequent in patients with clinically expressed deficiency”, the author notes challenges in diagnosing subclinical deficiency5 (mild metabolic abnormalities without clinical signs or symptoms). Assessment of this evidence base is complicated by the lack of a universally accepted gold standard and by target conditions that are difficult to define, variable clinical presentations and variable cut-off values used to define deficiency.

For investigating folate status, RCF assays are thought to be less susceptible to short-term dietary intake than are assays for serum folate. However, it has been reported that:

The red cell folate assay is more complex to perform than the serum folate assay and requires more steps in sample handling before analysis, and this may be one of the reasons why the precision of the red cell folate assay is less than that of the serum folate assay.6

As discussion continues over which folate test is preferable, new evidence related to the prevalence of folate deficiencies in countries with mandatory food fortification has shifted the focus toward whether there is a need to perform any folate investigations in these jurisdictions. In Australia, mandatory fortification of wheat flour with folic acid was introduced in September 2009.7 Prevalence estimates from a sample of inpatients and outpatients put folate deficiency at 0.5% in April 2010, an 85% reduction in absolute numbers since April 2009.7 While there is currently no evidence to suggest that the prevalence of megaloblastic anaemia caused by folate deficiency has been reduced, the low frequency of low serum folate and RCF test results in countries where there is mandatory fortification of grain products with folic acid supports the perspective that “there is no longer any justification in ordering folate assays to evaluate the folate status of the patients”.8

Technology development

Over time, multiple technologies for analysing vitamin B12 status have become available, including assays for measuring holotranscobalamin (holoTC, the bioavailable form of vitamin B12), as well as metabolic markers such as MMA and Hcy.3,5 However, like all tests, these are imperfect: holoTC is expensive, not routinely available, itself reliant on poorly defined serum B12 reference ranges, and is yet to be confirmed as a superior test to the serum B12 assay.5 Hcy measurement is subject to artefactual increases due to collection practices, and reference ranges are variable. The availability of MMA tests is restricted to some clinical and research laboratories. As a result, the optimal procedure for measuring vitamin B12 is unclear. As noted above, while a number of approaches exist for assessing folate status, there is currently no consensus on the most appropriate laboratory investigation process.

Temporal, geographical and provider variations

Australian Medicare utilisation data have shown substantial growth in the use of item 66602, which relates to the combined use of serum B12 and folate tests. Between the financial years 2000–01 and 2009–10, use increased from 1082 services per 100 000 population to 7243 services per 100 000 population (21.78% average annual growth rate).9 Over the same period, spending on pathology services overall grew at an average annual rate of 6.3%.
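As a back-of-envelope check on this arithmetic, the sketch below computes the endpoint-based compound annual growth rate. It is illustrative only; the small gap between it and the published 21.78% figure presumably reflects a different averaging method (for example, an arithmetic mean of year-on-year rates), which is an assumption on our part.

```python
# Compound annual growth of MBS item 66602 use from the endpoints:
# 1082 -> 7243 services per 100 000 population over the nine
# year-on-year steps from 2000-01 to 2009-10.
start, end, steps = 1082, 7243, 9
cagr = (end / start) ** (1 / steps) - 1
print(f"{cagr:.1%}")  # ~23.5%; the published 21.78% figure likely
                      # averages the observed year-on-year rates instead.
```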

Geographical variation was also present, with the number of services reimbursed for item 66602 ranging from 1962 per 100 000 population in the Northern Territory to 8658 per 100 000 population in the Australian Capital Territory in 2009–10.9 While some of this variation may be due to demographic differences and populations known to have access to fewer health services (eg, Indigenous Australians), the substantial temporal and geographical differences in use raise more questions about appropriate use of these tests, and whether or not they are underused or overused.

Guidelines

Guidelines related to the use of vitamin B12 and folate tests vary widely in their recommendations. While some recommend B12 and folate tests as screening tools in commonly encountered illnesses such as dementia, others suggest restricting testing to patients who have already undergone pretest investigations (such as full blood examinations; however, we note that neurological damage may occur in patients with low serum B12 levels and without haematological changes).10,11 Guidelines may differ on key recommendations, such as the preferred first-line investigation for establishing folate status, while some question the utility of folate investigations at all in jurisdictions where food is fortified with folate.12–14

Leakage

With wide variability in guideline recommendations, and with few appearing to consider the diagnostic accuracy of B12 or folate tests, determining the extent to which services have “leaked” beyond their clinical indications is difficult. Possible leakage is evidenced by the use of serum B12 tests among patients presenting with weakness and tiredness, which is not supported by any available guidelines.15 A large study of general practitioners indicated that between April 2000 – March 2002 and April 2006 – March 2008 their use of serum B12 tests among patients presenting with weakness and tiredness increased by 105%.15

Discussion

Tests for investigating patients’ vitamin B12 and folate status have become widely used in clinical practice. Yet existing evidence suggests that the diagnostic accuracy of serum B12 tests is difficult to determine and may be highly variable. While other tests are available for investigating suspected B12 and folate deficiency (such as holoTC, MMA and Hcy), the diagnostic accuracy of these tests is also contested. Challenges in examining the diagnostic accuracy of serum B12 tests include highly variable clinical presentations, lack of a gold standard and inconsistent cut-off values used to define deficiency. While it remains under debate whether the serum or red-cell folate test is most useful for investigating folate status, mandatory folate fortification in Australia may call into question any use of these tests.

Temporal variation in use and geographical differences in how these tests are employed are both evident in Australian data. Moreover, available clinical guidelines are highly inconsistent in their recommendations. Collectively, the issues of test accuracy, wide variability in test use, and inconsistent guideline recommendations suggest that the use of vitamin B12 and folate tests is an area with much scope for quality improvement.

To improve the use of these tests, further assessment is needed that examines the complexity associated with clinical decision making and the various factors influencing why doctors request these tests. The decision to request an investigation such as a B12 or folate test may be driven by a range of factors, including ease of use, cost, absence of significant patient risk, the perceived need to respond to patient requests, lack of appreciation of the diagnostic accuracy of the tests, or ready availability of results.16 Understanding how these factors influence the use of B12 and folate tests may best be achieved through direct consultation with general practitioners, pathologists, specialists and consumers and is a critical step in advancing the assessment of these tests.

Sevenfold rise in likelihood of pertussis test requests in a stable set of Australian general practice encounters, 2000–2011

Pertussis, commonly known as whooping cough, is caused by the small, gram-negative coccobacillus Bordetella pertussis. Classic whooping cough illness is characterised by intense paroxysmal coughing followed by an inspiratory “whoop” (especially in young children or those without prior immunity) and a protracted cough.1,2 It is now more widely understood that these characteristic symptoms are not always present during B. pertussis infection, and that individuals may have only symptoms similar to those of the common cold or a non-specific upper respiratory tract infection.2

In recent years, rates of pertussis notifications have increased dramatically across Australia and in many other parts of the world.3–6 The rise has been seen in all Australian states and territories, with the highest notification rates in children aged under 15 years.7 Although increased notifications may be due to a true increase in circulating B. pertussis, it is possible that the magnitude of the increase has been amplified by better recognition of disease and more frequent testing.8

Historically, the diagnostic gold standard for pertussis laboratory testing was bacterial culture from nasopharyngeal secretions during the early phases of infection (Weeks 1 and 2), and serological testing was used as an alternative diagnostic method during later phases of infection.2 However, even with ideal specimen collection, transport and handling, culture has low sensitivity and does not provide timely results. Although serological testing is more sensitive, sensitivity and specificity may be lowered depending on the timing of specimen collection and the patient’s infection and vaccination history.9 Polymerase chain reaction (PCR) testing has emerged as a key diagnostic method, and respiratory specimens are now commonly tested for pertussis using PCR in Australia and other countries.2,4,10 PCR testing provides more sensitive and rapid results than culture and serological testing. Also, PCR allows less invasive specimen collection — especially useful in younger age groups, in whom infection rates are high and serum collection may be challenging.1

The key datasets used to monitor pertussis incidence and epidemiology in Australia — pertussis notifications, and pertussis-coded hospitalisations and deaths — are populated by positive test results from laboratories and, as such, are not independent of changes in testing practices. Without negative test results or other denominator data to assess changes in testing behaviour, it is difficult to distinguish changes in recorded disease incidence that are due to the effect of increased testing from any true increase in disease.

To better understand the effect of testing behaviour on current pertussis epidemiology in Australia, we investigated pertussis testing trends in a stable set of general practice encounters. We hypothesised that the likelihood of pertussis testing, in a stable set of encounters that were most likely to result in a pertussis test request, has increased over time, and that this may have amplified laboratory-confirmed pertussis identification in Australia.

Methods

We analysed data from the Bettering the Evaluation and Care of Health (BEACH) program and the National Notifiable Diseases Surveillance System (NNDSS).

The BEACH program is a continuous cross-sectional national study that began collecting details of Australian general practice encounters in April 1998. Study methods for the BEACH program have been described elsewhere11 and are summarised in Appendix 1.

Initially, all encounters for which a pertussis test (ICPC-2 [International Classification of Primary Care, Version 2] PLUS code R33007 [ICPC-2 PLUS label: Test;pertussis]12) was ordered in the period April 2009 – March 2011 were identified and examined. During this period, 30 BEACH problems resulted in a pertussis test request at some time, and nine problems accounted for 90.9% of all pertussis test requests in the dataset. Four other problems, for which a pertussis test request was made at more than 1% of general practice management occasions of that problem, were added (Appendix 2). The 13 selected problems accounted for 92.3% of pertussis tests ordered between April 2009 and March 2011. These were labelled “pertussis-related problems” (PRPs) and data for these problems at encounters recorded between April 2000 and March 2011 were extracted.

BEACH data were grouped into two pre-epidemic periods (before the start of the national pertussis outbreak in 2008) and three epidemic years. During the pre-epidemic periods (April 2000 – March 2004 and April 2004 – March 2008), testing proportions were constant. For each pre-epidemic period and epidemic year, the proportion of PRPs with a pertussis test ordered and the proportion of BEACH problems that were PRPs were calculated. The proportions of PRPs with a pertussis test ordered were also calculated for clinically meaningful age groups: 0–4 years, 5–9 years, 10–19 years, 20–39 years, 40–59 years, and ≥ 60 years.

The NNDSS collates notifications of confirmed and probable pertussis cases received in each state and territory of Australia under appropriate public health legislation.13 Notified cases meet a pertussis case definition, which requires: laboratory definitive evidence; or laboratory suggestive evidence and clinical evidence; or clinical evidence and epidemiological evidence (Appendix 3).

To match the BEACH years, all Australian pertussis notifications between April 2000 and March 2011 were extracted from the NNDSS database, including data on age and laboratory testing method. Pertussis notifications were aggregated by month and year, by age group, and by laboratory test method (serological, PCR, culture or unknown). Notifications that had more than one testing modality reported were classified into a single test category using the following hierarchy: culture, PCR, serological, unknown. A total of 1318 notifications were coded only as “antigen detection”, “histopathology”, “microscopy”, “not done” or “other” (epidemiologically linked cases); these were excluded from the analysis as they accounted for only 0.9% of notifications over the study period. The rates of pertussis notifications per 100 000 population were then calculated for each pre-epidemic period and epidemic year.
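A minimal Python sketch of the single-category classification rule described above follows; the field values are illustrative and do not represent the actual NNDSS coding scheme.

```python
# Assign each notification to one test category using the stated
# hierarchy: culture > PCR > serological > unknown. Notifications
# reporting only other modalities (e.g. 'antigen detection') fall
# through and are excluded, as in the analysis above.
HIERARCHY = ("culture", "PCR", "serological", "unknown")

def classify(methods_reported):
    for method in HIERARCHY:
        if method in methods_reported:
            return method
    return None  # excluded from the analysis

print(classify({"PCR", "serological"}))  # -> 'PCR'
```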

Temporal changes in the proportions of PRPs with a pertussis test ordered and the rates of pertussis notifications were assessed with a non-parametric test for trend over the whole study period and by calculating odds ratios with robust 95% confidence intervals. Correlation coefficients were calculated to determine the relationship between BEACH and NNDSS datasets. The BEACH analyses incorporated an adjustment for the cluster sample design. Initial BEACH analyses were performed using SAS version 9.1.3 (SAS Institute). Subsequent BEACH and NNDSS analyses were performed using Microsoft Excel and Stata version 11 (StataCorp).

During the study period, the BEACH program had ethics approval from the University of Sydney Human Research Ethics Committee and the Australian Institute of Health and Welfare Ethics Committee. This particular study was approved by the University of Queensland Medical Research Ethics Committee.

Results

The PRPs captured an average of 90.7% of all BEACH problems with a pertussis test ordered each year (range, 87.7%–92.7%) between April 2000 and March 2011 (Box 1). During the study period, PRPs as a proportion of all BEACH problems remained stable, with an annual average of 8.0% (range, 7.7%–8.7%).

When the BEACH data were grouped into pre-epidemic periods and epidemic years, the proportion of PRPs with a pertussis test ordered increased about sevenfold — from 0.25% to 1.71% — when comparing April 2000 – March 2004 and April 2010 – March 2011 (Box 2, Box 3). This increase was highly correlated with NNDSS pertussis notification rates (correlation coefficient [r], 0.99), which increased about fivefold during the same period (Box 3). A highly significant trend was detected for changes in BEACH pertussis test requests (P < 0.001) and NNDSS notification rates (P < 0.001) from April 2000 to March 2011.
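The reported sevenfold figure can be recovered directly from the two proportions. The sketch below is a back-of-envelope check only; the published confidence interval additionally accounts for the BEACH cluster sample design.

```python
# Odds ratio implied by the all-ages testing proportions in Box 3:
# 0.25% of PRPs tested in 2000-2004 vs 1.71% in 2010-2011.
p_early, p_late = 0.0025, 0.0171
odds = lambda p: p / (1 - p)
print(round(odds(p_late) / odds(p_early), 1))  # ~6.9, reported as 7.0
```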

In the age-specific analysis, there were significant increases in laboratory-confirmed pertussis notifications and in the likelihood of pertussis test requests across all age groups during the study period (Box 3). When comparing April 2000 – March 2004 and April 2010 – March 2011 pertussis testing rates, the largest increase was in 5–9-year-olds (odds ratio, 11.6; 95% CI, 4.2–36.7), followed by 0–4-year-olds, 40–59-year-olds and ≥ 60-year-olds. With the exception of 5–9-year-olds, the increase in pertussis testing exceeded notification changes in all age groups.

Numbers of NNDSS pertussis notifications fluctuated over the study period (Box 4). From 2008 onwards, there was a clear increase in the numbers of PCR-confirmed notifications. Before April 2008, most NNDSS notifications were confirmed by serological testing (66.0%–80.7% of all notifications annually). The proportion of notifications confirmed by PCR increased from 16.3% in April 2000 – March 2004 to 65.3% in April 2010 – March 2011 (Box 1). The proportion of notifications confirmed by culture did not change, and accounted for an average of 2.0% of all notifications over the study period.

Discussion

Consistent with experience in other developed countries,4,14–16 we found that rates of pertussis notifications in Australia increased dramatically in recent years, from an average annual rate of 34 per 100 000 population between April 2000 and March 2004 to 158 per 100 000 population between April 2010 and March 2011. Our results cast some light on the potential role that the increasing likelihood of a pertussis test request may have had in this change.

Using BEACH data, we found that individuals presenting to an Australian general practitioner between April 2010 and March 2011 were seven times more likely to have been tested for pertussis than 10 years earlier. This finding was within a set of general practice problems that remained stable as a proportion of all problems. This increased likelihood of pertussis testing, most evident in the epidemic years from April 2008 onwards, reached a maximum in the period April 2010 – March 2011, when pertussis tests were ordered in 1.71% of PRPs. A particular strength of our findings is that we used a data source that does not rely on laboratory testing, unlike most other datasets used to monitor pertussis in Australia and elsewhere.4,7,10,13

The increased likelihood of testing in general practice coincided with an increasing proportion of NNDSS pertussis notifications being confirmed by PCR, from 16.3% between April 2000 and March 2004 to 65.3% between April 2010 and March 2011. A review of pertussis cases in New South Wales during the period 2008–2009 also showed a shift away from serological testing (the predominant method before 2008) to PCR testing from 2008 onwards.10

Pertussis notification rates and the likelihood of testing varied considerably across age groups. There was a dramatic increase in notification rates in 0–4-year-olds and 5–9-year-olds from the 2004–2008 pre-epidemic period to the 2008–2009 epidemic year, compared with a moderate increase in testing, which indicates that there probably was a true increase in disease for these groups during this period. It is possible that a real increase in 0–4-year-olds and 5–9-year-olds early on prompted increased disease awareness among GPs, leading to widespread increases in testing across all ages. A positive feedback loop due to increased testing — leading to increased disease detection and awareness, leading to increased testing, and so on — may have been established. This interpretation is supported by the observation that although testing continued to increase from the epidemic year 2008–2009 to the epidemic year 2009–2010, there was little change in notifications, suggesting an increase in testing during that period rather than an increase in disease. In the other age groups, an increase in testing appeared to be responsible for magnified pertussis notifications. A study of pertussis resurgence in Toronto, Canada, also described this phenomenon and concluded that, although there had been a true increase in local disease activity, the apparent size of the increase had been magnified by an increase in the use of pertussis testing and improvements in test sensitivity.4

In Australia, public funding for pathology laboratories to use PCR to test specimens for pertussis (and other pathogens) commenced under the Medicare Benefits Schedule (MBS) in November 2005.17 This specific reimbursement for PCR testing (MBS item 69494) — a Medicare fee of $28.65 compared with $22.00 for culture (MBS item 69303) and $15.65 for serological testing (MBS item 69384)17 — may have been an incentive for laboratories across Australia to implement PCR testing more routinely. In addition, during the 2009 H1N1 influenza pandemic, public funding was allocated for the purchase of laboratory equipment (notably PCR suites), but much of the funding was not received by laboratories until after the demand for pandemic influenza testing had subsided.18 New PCR capacity may have provided an increased opportunity for laboratories to conduct PCR testing for other pathogens, such as B. pertussis.

Several factors may have contributed to an increase in disease during the study period. Pertussis laboratory testing methods have been documented to vary between children and adults. While, historically, culture would have been preferred to serological tests for the very young,19 children now are more likely to be tested using PCR, and adults are predominantly tested using serological tests.10 The variation in testing and notification rates across age groups may be due to differences in susceptibility and immunity.20 Pertussis vaccination does not provide lifelong immunity against infection, with protection waning between booster doses.21 Waning immunity may partially explain differences in pertussis incidence between age groups, with older individuals having lower immunity due to longer periods since vaccination.20 Furthermore, there is evidence to suggest that whole-cell pertussis vaccine formulations used in Australia and overseas before the late 1990s were more protective against B. pertussis infection than currently used acellular pertussis vaccines,14,22–24 resulting in immunity levels waning faster in some age cohorts due to changes in vaccination schedules.25 In addition, a recent analysis of B. pertussis isolates collected in Australia between 2008 and 2010 indicates that there has been increasing circulation of vaccine-mismatched strains, hypothesised to be due to the selective pressure of vaccine-induced immunity.26

While these or other factors may have led to a true increase in disease during the study period, our data suggest that increased testing, most likely due to expanding use of PCR during the study period, has almost certainly amplified the magnitude of notified pertussis activity in Australia. This increase in testing might have led to identification of illness that would have otherwise gone undetected among age groups in which pertussis circulates widely or age groups in which pertussis had previously been largely left as a clinical diagnosis.

Our findings have global implications, particularly for countries with high or expanding PCR availability. They highlight the critical importance of analysing changes in infectious diseases using a range of surveillance systems. By monitoring changes in laboratory testing and using surveillance datasets that do not rely on laboratory test results, it is possible to determine whether increases in notifications for diseases such as pertussis are due to a true increase in disease, an increase in testing, or a combination of both.

1 PRPs as a proportion of all BEACH problems with a pertussis test ordered and as a proportion of all BEACH problems, and NNDSS PCR tests as a proportion of all NNDSS pertussis notifications, April 2000 to March 2011

| Period (April – March) | PRPs as a proportion of all BEACH problems with a pertussis test ordered (total no. of pertussis tests) | PRPs as a proportion of all BEACH problems (total no. of PRPs) | NNDSS PCR tests as a proportion of all NNDSS pertussis notifications (total no. of pertussis notifications) |
|---|---|---|---|
| 2000–2004 | 89.4% (141) | 8.7% (51 396) | 16.3% (16 983) |
| 2004–2008 | 92.1% (216) | 7.9% (45 872) | 11.3% (31 559) |
| 2008–2009 | 87.7% (114) | 7.9% (12 551) | 55.4% (17 945) |
| 2009–2010 | 92.7% (164) | 7.8% (12 228) | 55.8% (22 754) |
| 2010–2011 | 91.7% (216) | 7.7% (11 557) | 65.3% (33 641) |

PRP = pertussis-related problem. BEACH = Bettering the Evaluation and Care of Health. NNDSS = National Notifiable Diseases Surveillance System. PCR = polymerase chain reaction.

2 Proportions of BEACH PRPs with a pertussis test ordered, and NNDSS pertussis notification rates, April 2000 to March 2011


BEACH = Bettering the Evaluation and Care of Health. PRP = pertussis-related problem.
NNDSS = National Notifiable Diseases Surveillance System. * Data for 2000–2004 and 2004–2008 are averaged annual rates, and data for 2008–2009, 2009–2010 and 2010–2011 are annual rates.

3 Proportions of BEACH PRPs with a pertussis test ordered, and NNDSS pertussis notification rates per 100 000 population, by age group, April 2000 to March 2011

| Age group | Dataset | 2000–2004* | 2004–2008* | 2008–2009† | 2009–2010† | 2010–2011† | Odds ratio (95% CI)‡ | Correlation coefficient (r)§ |
|---|---|---|---|---|---|---|---|---|
| 0–4 years | BEACH | 0.16% | 0.12% | 0.48% | 0.89% | 1.31% | 8.0 (3.9–17.2) | 0.89 |
| | NNDSS | 44.97 | 35.78 | 244.23 | 225.60 | 299.17 | 4.7 (4.3–5.2) | |
| 5–9 years | BEACH | 0.16% | 0.22% | 0.78% | 2.61% | 1.87% | 11.6 (4.2–36.7) | 0.75 |
| | NNDSS | 29.68 | 17.55 | 202.17 | 260.06 | 507.62 | 14.2 (12.8–15.7) | |
| 10–19 years | BEACH | 0.36% | 0.36% | 1.27% | 1.95% | 2.05% | 5.7 (2.8–11.4) | 0.88 |
| | NNDSS | 82.29 | 38.24 | 126.87 | 134.15 | 226.45 | 2.2 (2.1–2.3) | |
| 20–39 years | BEACH | 0.33% | 0.54% | 0.92% | 1.10% | 1.76% | 5.4 (3.5–8.5) | 0.98 |
| | NNDSS | 22.58 | 34.51 | 64.41 | 84.42 | 105.60 | 2.8 (2.6–3.0) | |
| 40–59 years | BEACH | 0.25% | 0.65% | 1.05% | 1.50% | 2.09% | 8.5 (5.2–14.0) | 0.99 |
| | NNDSS | 29.17 | 54.60 | 82.12 | 113.33 | 153.61 | 3.2 (3.0–3.4) | |
| ≥ 60 years | BEACH | 0.19% | 0.41% | 0.50% | 0.54% | 1.43% | 7.6 (4.3–13.7) | 0.90 |
| | NNDSS | 16.12 | 50.15 | 76.57 | 103.49 | 142.84 | 5.6 (5.1–6.2) | |
| All ages | BEACH | 0.25% | 0.43% | 0.80% | 1.24% | 1.71% | 7.0 (5.5–8.8) | 0.99 |
| | NNDSS | 33.73 | 42.31 | 88.86 | 108.56 | 158.42 | 3.2 (3.2–3.3) | |

BEACH = Bettering the Evaluation and Care of Health. PRP = pertussis-related problem. NNDSS = National Notifiable Diseases Surveillance System. All periods run April – March. * Pre-epidemic period; NNDSS data are average notifications per 100 000 population per year. † Epidemic year; NNDSS data are notifications per 100 000 population per year. ‡ Comparison of 2000–2004 and 2010–2011 data. § Correlation between BEACH and NNDSS data.

4 NNDSS pertussis notifications by laboratory test method, April 2000 to March 2011


NNDSS = National Notifiable Diseases Surveillance System. PCR = polymerase chain reaction.

Take a deep breath . . . and talk

To the Editor: We congratulate Nowak on her discussion of communication in the emergency department (ED).1 This is certainly an issue that has an impact on both patient safety and satisfaction, and is an area in which all medical personnel can improve. The challenges in the ED are manifold, and time-based targets tend to militate against effective communication.2 A strategy that we have employed in our department is to involve the patient and their family in discussions that are relevant to them — in particular, the ward round. The first step in the round is to introduce the participating doctors, nurses and students to the patient, explain why the round is being done, and let the patient listen to the presentation, which, after all, is about them and should not be a secret.

Once the presentation is finished, the patient is invited to add to or challenge anything that has been said. This is listened to and responded to in the same manner as any of the participants’ contributions to the process.

We have found that not only are patients, doctors and nurses more satisfied with this process, but we often find out extra information, particularly pertaining to the patient’s history and their stage in the journey through the ED.

This small change, which usually adds less than a minute to each patient encounter on the round, is well worth considering not only in the ED but also in other ward-based environments.

Where is the next generation of medical educators?

To the Editor: The thought-provoking editorial by Hu and colleagues laments the “erratic supply of medical educators”.1 Curriculum design and review, course accreditation, and student teaching and assessment at all levels require specialist expertise in education. The 17th National Prevocational Medical Education Forum, held in Perth in November 2012, revealed pressures throughout the medical education and training pipeline.

With an unprecedented increase in student numbers in response to population growth and increasing demand for doctors, particularly in general medicine and rural practice, the requirement for more medical educators is critical. Hu et al’s editorial is timely in identifying this need and in recognising medical education as an evolving specialist discipline within the medical profession. However, much of the clinical teaching in our medical schools still depends on a motivated minority of senior specialists in clinical practice who are drawn from a much larger pool of potential educators.

In Western Australia, we face significant population growth and a current shortage of up to 1000 general practitioners2 as we prepare for the establishment of the state’s third medical school at Curtin University. One hundred new domestic medical students are anticipated to be in their clinical placement years by 2019. We applaud Hu and colleagues for drawing attention to the persistent shortage of specialist medical educators, and acknowledge our ongoing debt to members of the profession who find satisfaction in the Hippocratic tradition.

Philately and the Diagnostic and statistical manual of mental disorders

To the Editor: I was interested in the recent articles from the “Stamps of greatness” series, reprinted from past issues of the AMA Gazette, containing the memorialisation in stamps of famous historical leaders in medicine. Few readers would be likely to realise the association between the hobby of philately (the study and collection of postage stamps) itself and the development of the Diagnostic and statistical manual of mental disorders (DSM) classification of psychiatric illness.1

As an activity in itself, the hobby of philately is likely to have significant mental health benefits, such as enhancing organisational ability in a relaxed environment, as well as allowing the collector to enjoy the design and art of the stamps, and to engage intellectually with the value, geography and historical context of the stamp. However, the way in which philately applied to the development of the DSM is one of those chance moments that affect the subsequent course of history.

William Menninger, a member of a prominent United States family of psychiatrists from Kansas, was appointed to the 4th Service Command of the US Army during the Second World War after the sudden death of the head of the neuropsychiatric division. Menninger commented that on his appointment, his commanding general, Major General Henning, had little use for psychiatrists. However, when Menninger made his first formal call on Major General Henning, he found him working on his stamp collection and was able to develop immediate rapport with him because of a common interest in philately.

Menninger also displayed other talents, such as playing the piano and exhibiting an appropriate sense of humour in gatherings of senior military officers, which further enhanced his acceptance within the staff of the US Surgeon General and increased his influence, leading to his subsequent promotion to Brigadier General. Menninger used his position to add a substantial number of psychiatrists to the Surgeon General’s division.2

The acceptance that Menninger developed within the division also led him to chair a committee that produced the War department technical bulletin, medical 203 in 1946, a document that classified mental illness in a detailed fashion for the first time in the US. Menninger’s interest in psychological factors related to mental illness, rather than the more narrow categorisation of the mental illness of people in institutional care, appeared to strongly influence the document, which later had a significant effect on the development of the DSM by the American Psychiatric Association.2

Screening, referral and treatment for depression in patients with coronary heart disease

In 2003, an Expert Working Group of the National Heart Foundation of Australia (NHFA) issued a position statement on the relationship between “stress” and heart disease. They concluded that depression was an important independent risk factor for first and recurrent coronary heart disease (CHD) events.1 Here, we provide an update on evidence obtained since 2003 regarding depression in patients with CHD, and include guidance for health professionals on screening and treatment for depression in these patients. Our statement refers to depression in general (mild, moderate and severe), as all grades of depression have an impact on CHD prognosis. The process for developing this consensus statement is described in Box 1. Treatment decisions should take into account the individual clinical circumstances of each patient.

Epidemiology

The prevalence of depression is high in patients with CHD. Rates of major depressive disorder of around 15% have been reported in patients after myocardial infarction (MI) or coronary artery bypass graft surgery.3,4 If milder forms of depression are included, a prevalence of greater than 40% has been documented.3,4 Recently, the EUROASPIRE III study investigated 8580 patients after hospitalisation for CHD.5 The proportion of patients with depression, measured by the Hospital Anxiety and Depression Scale, varied from 8.2% to 35.7% in men and from 10.3% to 62.5% in women. This is consistent with Australian and New Zealand data from the 6-year Long-term Intervention with Pravastatin in Ischaemic Disease (LIPID) study.6,7 At the end of this trial, 27% of men and 35% of women were identified as depressed using the Beck Depression Inventory II (BDI-II) questionnaire.

A large systematic review in 2006 suggested that individuals with depression, but no current CHD, have a moderately elevated relative risk (1.6) of a later index CHD event.8 This elevated risk was confirmed in the Whitehall II study of 5936 healthy individuals over a 6-year period, in which depression was associated with a hazard ratio of 1.93 for cardiovascular events.9 In the Nurses’ Health Study, 78 282 healthy women were assessed for depression. In the 6-year follow-up period, 4654 deaths were reported, including 979 deaths from cardiovascular disease.10 Depression was associated with increased all-cause mortality, with an age-adjusted relative risk of 1.76 (95% CI, 1.64–1.89).10 The effect of depression on CHD incidence is thought to be strongest around the time of the depressive episode, with longer-term effects mediated via recurrence of depression.11 The association between depression and CHD may be stronger in young people.12

The case–control INTERHEART Study included 11 119 patients with MI from 52 countries.13 Perceived stress and depression were shown to be important risk factors that together accounted for 32.5% of the population attributable risk (PAR) for CHD, suggesting that, combined, they were as important as smoking and more important than diabetes (PAR, 9.9%) and hypertension (PAR, 17.9%) as risk factors for CHD.13

For people with CHD and comorbid depression, the relative risk (RR) of death is increased (RR, 1.80 [95% CI, 1.50–2.15]), independent of standard risk factors for secondary prevention.8 Comorbid depression also leads to a higher risk of other adverse outcomes in patients with CHD, such as a lower likelihood of return to work, poorer exercise tolerance, less adherence to therapy, greater disability, poorer quality of life, cognitive decline and earlier dependency.1420 Individuals with CHD and comorbid depression often have less access to interventions for CHD, despite being in a higher-risk group.2123

Definition of depression and types of depression

The diagnosis of depression can be difficult in people with cardiovascular disease, as depressive symptoms such as fatigue and low energy are common to both CHD and heart failure, and may also be a side effect of some drugs used to treat cardiovascular disease, such as β-blockers.24 The diagnosis may be further complicated in such patients by their responses to their disease (and the associated stigma), which may include denial, avoidance, withdrawal and anxiety.

According to the Diagnostic and statistical manual of mental disorders, fourth edition (DSM-IV),25 major depression is diagnosed when there is a minimum of 2 weeks of depressed mood and/or lack of pleasure (anhedonia), accompanied by four or more other (listed) symptoms such as sleep disturbance, appetite disturbance, poor energy, psychomotor impairment or agitation, poor concentration or poor decision making, and suicidal ideas or thoughts of death. The association with CHD appears to increase with greater severity of depressive symptoms across the spectrum, with no discrete cut-off point at “major depression”. Some studies have suggested links between particular subtypes of depression, such as somatic or anhedonic depression, but these are not consistent findings.2628

Screening for depression in patients with coronary heart disease

Screening of a population group for a risk factor or disease is worthwhile when the risk factor or disease has a reasonably high prevalence, there is a robust screening test, and effective and cost-effective treatments are readily available.29,30 Depression is both a risk factor and a disease in its own right, and fulfils these criteria for population screening. Screening for depression in patients with CHD would be expected to produce a higher yield than screening for depression in the general population, owing to a much higher prevalence of depression in patients with CHD. It is important to recognise depression in patients with CHD in order to provide the best possible care. Asymptomatic patients with significant cardiovascular risk factors (eg, those with diabetes) may also be considered for screening, as they have a high risk of depression.31

Many self-report screening tools exist for detecting possible depression. These include the Patient Health Questionnaire (PHQ-2, PHQ-9), the Cardiac Depression Scale (CDS), the BDI-I and -II, and the Hospital Anxiety and Depression Scale.32,33 The BDI appears to be the most commonly used tool in studies involving cardiac patients. The CDS was developed by a member of the Expert Working Group (D L H) specifically for patients with cardiac disease.33 The short version (short form) has only five items. There is limited but expanding information on the use of the PHQ-9 in patients with cardiac disease.34,35 It is used widely in primary care. Simple tools such as the Kessler Psychological Distress Scale (K10),36 a measure of general distress, will often overdiagnose depression. This tool is currently used in mental health plans in Australia; however, there is no evidence of its use specifically for patients with CHD.

Recognising the need for a simple screening tool for depression in cardiovascular patients, the 2008 American Heart Association (AHA) Science Advisory suggested the use of the PHQ-2.37 The PHQ-2 is an abbreviated form of the PHQ-9, with only the first two of the nine questions in the PHQ-9 (Box 2).38 There are also other versions of the PHQ-2, which may use shorter time frames. The AHA recommended the use of the PHQ-9 if depression was noted using the PHQ-2.37 The Royal Australian College of General Practitioners’ Guidelines for preventive activities in general practice (the red book) also uses a categorical (Yes/No) version of the PHQ-2.39
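For readers unfamiliar with the instrument, the sketch below scores the PHQ-2. The 0–3 response weights and the cut-off of ≥ 3 are the commonly used convention rather than details specified in this statement; the categorical (Yes/No) version simply treats a positive answer to either question as a positive screen.

```python
# Conventional PHQ-2 scoring (an assumption: weights and cut-off follow
# common usage, not a specification from this consensus statement).
# Each item asks how often, over the past 2 weeks, the patient has been
# bothered by (1) little interest or pleasure in doing things and
# (2) feeling down, depressed or hopeless.
RESPONSES = {"not at all": 0, "several days": 1,
             "more than half the days": 2, "nearly every day": 3}

def phq2_positive(interest_item, mood_item, cutoff=3):
    """Screen positive if the two item scores sum to >= cutoff."""
    return RESPONSES[interest_item] + RESPONSES[mood_item] >= cutoff

print(phq2_positive("several days", "more than half the days"))  # True
```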

The PHQ-2 and the PHQ-9 screening tools are associated with reasonable sensitivity and specificity.34 Importantly, depression diagnosed with the PHQ-2 and the PHQ-9 has been shown to predict worse CHD outcomes. In the Heart and Soul Study, positive responses to either question in the PHQ-2 (Yes/No version) predicted a 55% greater risk of cardiovascular events.35 Furthermore, the validity of the PHQ-2 and the PHQ-9 has been assessed in a variety of patients with varying clinical problems, ages and ethnicities, including in Australian Aboriginal people from urban and rural areas and people from the Torres Strait Islands.4042 Adapted versions exist for use with Indigenous people.

Access to each of the tools varies. No copyright is breached by use of the PHQ-2 and the PHQ-9, and the PHQs and the CDS are free.43 However, some questionnaires such as the BDI-I and -II are subject to copyright and a royalty must be paid each time they are used.44

Implicit in the AHA Science Advisory37 is that screening and identification of patients with depression leads to appropriate treatment, or referral for treatment, by the responsible attending medical practitioner. Unfortunately, research has shown that screening may have little or no impact on the treatment of depression or on outcomes.45,46 Screening by nurses, researchers, receptionists or social workers is not sufficient without appropriate referral or treatment.

It is recommended that a simple tool, such as the PHQ-2 or the short-form CDS, be incorporated into routine screening of patients with CHD. Routine screening for depression is indicated at first presentation, and again at the next follow-up appointment. A follow-up screen should occur 2–3 months after a CHD event. Screening should then be considered on a yearly basis, as for any other major risk factor for CHD. Consideration should also be given to screening the partner or spouse of these patients for depression, as studies show that they are at increased risk of developing depression.47 If screening is followed by comprehensive care, depression outcomes are likely to be improved.

Treatment of depression in patients with CHD

Collaborative care

Although individual treatment approaches and strategies have been studied, in practice a collaborative-care or stepped-care approach is probably optimal for managing patients with CHD and comorbid depression. The concept of collaborative care involves a group of health professionals working together in a coordinated manner, and this approach has consistently been shown to be associated with greater improvement in depression for patients with CHD compared with standard care, and to be cost-effective.4852 For example, collaborative care after coronary artery bypass grafting improved depression scores, but not physical function or re-hospitalisation rates.48 In patients with depression comorbid with poorly controlled diabetes and/or CHD, the collaborative approach resulted in improvement in depression scores, glycated haemoglobin levels, low-density lipoprotein cholesterol levels, and systolic blood pressure.50

The Coronary Psychosocial Evaluation Studies (COPES) trial52,53 used a stepped-care treatment approach in patients with acute coronary syndromes (ACS) and persistent depression. Depressive symptoms decreased substantially in the intervention (stepped-care) group. Only three (4%) of the intervention patients experienced major adverse cardiac events compared with 10 (13%) of the patients given usual care, suggesting improved cardiovascular outcomes. Moreover, this stepped-care approach was associated with a 43% lower total health cost over the 6-month trial period.54

Pharmacological therapy

The efficacies of fluoxetine,55 sertraline (Sertraline Antidepressant Heart Attack Randomized Trial [SADHART],56 Enhancing Recovery in Coronary Heart Disease Patients [ENRICHD] trial),57 citalopram (Cardiac Randomized Evaluation of Antidepressant and Psychotherapy Efficacy trial [CREATE])58 and mirtazapine (Myocardial Infarction and Depression Intervention Trial [MIND-IT])59 have been evaluated in clinical trials involving patients with CHD.

SADHART studied depression after ACS over 6 months. Depression scores in patients taking sertraline improved significantly more than in those receiving placebo. Most patients were also prescribed aspirin, statins and β-blockers. Life-threatening cardiovascular events occurred less frequently in the sertraline group; however, this result was not statistically significant.56

ENRICHD was a large trial that evaluated the effect of cognitive behaviour therapy (CBT) on depression or low social support in patients with a recent MI. Depression was diagnosed in 74% of participants. CBT improved depression but failed to reduce the number of CHD events. Patients whose depression did not respond to CBT were referred for treatment with antidepressant drugs. Selective serotonin reuptake inhibitors (SSRIs) (mainly sertraline) significantly improved depression in those patients. In the SSRI-treated group, there was a 43% (P < 0.005) reduction in deaths or recurrent MIs.57 However, this was a subset analysis, and therefore is hypothesis generating only.

In the Canadian CREATE trial58 and the MIND-IT trial,59 there were too few CHD events reported to enable analysis of cardiovascular outcomes.

Tricyclic antidepressants may worsen CHD outcomes, having been associated with increased mortality in patients with CHD, and should be avoided in these patients.60–62 In contrast, a recent meta-analysis of trials involving SSRIs in patients with CHD concluded that this class of drugs was well tolerated, with the risk of adverse events being similar to that for placebo.63

Psychological therapy

Of the various psychological therapies, CBT and integrative therapies (eg, interpersonal psychotherapy) have the best documented efficacy for treatment of major depressive disorder.64,65 CBT was used in the ENRICHD trial,57 interpersonal psychotherapy in the CREATE trial,58 and problem-solving therapy in the COPES trial.52,53 These therapies were all beneficial for depression but did not affect CHD outcomes. The efficacy of psychological therapy as a treatment for major or minor depression was evaluated in patients who underwent coronary artery bypass surgery.66 Significantly more patients in the CBT group (71%) and the stress management group (57%) had low depressive symptom levels, compared with those having usual care (33%), and these results were maintained at 6 months.66

A Cochrane review of psychological interventions for patients with CHD found evidence of small-to-moderate improvements in depression and anxiety symptoms with such interventions, but no strong evidence that the interventions reduced total deaths, risk of revascularisation, or non-fatal infarction.67 The interventions that were less effective were those that aimed to educate patients about cardiac risk factors; those that included client-led discussion and emotional support; or those that included family members in the treatment process.67 Uncertainty remains regarding the subgroups of patients who would benefit most from psychological treatments and the characteristics of successful interventions.

Exercise

Many patients with mild depression respond well to regular exercise and cardiac rehabilitation (exercise-based). A recent Cochrane review of exercise as a treatment for depression concluded that exercise improves depression with a similar efficacy to CBT.68 The benefit of exercise appears to have a dose–response relationship, needing at least 30 minutes of moderate aerobic activity on 5 days per week.69,70 This is consistent with usual public health recommendations.

The benefit of exercise in patients with CHD and depression was demonstrated in the recent UPBEAT (Understanding the Prognostic Benefits of Exercise and Antidepressant Therapy) trial.71 Patients with CHD who had at least mildly elevated depressive symptom scores (BDI score > 7) were randomly allocated to treatment with SSRIs, exercise or neither. Exercise was equivalent to SSRI treatment in improving depression scores, with patients in both groups showing greater improvement than the control group.71 In a large randomised controlled trial (RCT) of 2322 patients with heart failure (of whom 28% were depressed), exercise not only reduced mortality and hospitalisation (P = 0.03) but also significantly reduced depression (P = 0.002).72

Complementary and alternative therapies

Up to 50% of patients with depression have been shown to use complementary and alternative medicines without disclosing this to their treating clinician.73 Therapies that may be effective in depression include supplemental marine n-3 fatty acids (eicosapentaenoic acid [EPA] and docosahexaenoic acid [DHA]), S-adenosylmethionine (SAMe) and St John’s wort.74 Specific trials of the latter two therapies have not been performed in patients with CHD and depression.

Marine n-3 fatty acids (at a dose of 1 g per day of combined EPA and DHA) are recommended by the NHFA and the AHA for all patients with CHD.75 This dose may also improve mild depression. However, adding 2 g/day of combined EPA and DHA to sertraline 50 mg daily appears to provide no benefit for depressive symptoms beyond sertraline alone.76 Some trials comparing St John’s wort and SAMe with antidepressant medications suggest similar effectiveness in improving depression.77–79 However, most commercial brands of St John’s wort have not undergone randomised trials.78,79

Adherence

Depression is a major predictor of poor adherence in patients with CHD, whether to drug therapy or to lifestyle measures.80,81 Patients with depression are three times more likely to be non-compliant with medical treatment than patients without depression.82 Greater severity and chronicity of depression have been associated with poorer adherence to aspirin therapy after MI.83 Adherence to aspirin therapy after ACS has been shown to be significantly lower in persistently depressed patients (76.1%) than in those whose depression improved (87.4%) or who were not depressed (89.5%).83 Patients who are persistently depressed are also less likely to undertake risk-reducing behaviours such as quitting smoking, taking medications, exercising and attending cardiac rehabilitation.81 In the SADHART trial, adherence to medication increased after remission of depression in 68.4% of participants taking the trial medication.84

A recent RCT of a collaborative-care depression treatment program in 134 patients with depression after ACS demonstrated improved adherence to medications and secondary prevention behaviours, which was independently associated with improvement in depression.85 However, in another RCT of 157 patients undergoing treatment for depression after ACS, there were no improvements in adherence to risk-reducing behaviours despite a significant reduction in depression.86

Referral

Once depression is identified through screening, treatment may be initiated immediately, or referral to psychological or psychiatric services may be appropriate. Most patients with depression in Australia are managed by general practitioners.87

Members of the Cardiac Society of Australia and New Zealand, the majority of whom are clinical cardiologists, were surveyed regarding assessment of depression. Most respondents screened for depression occasionally, with only 3% using a formal tool. Lack of confidence in identifying depression was the strongest predictor of a low screening frequency. Cardiologists rarely initiated treatment for depression, and 43% did not feel they were responsible for treating depression.88

There can be a reluctance to treat depression in patients with CHD because of a belief that depression is normal after an acute cardiovascular event. Mild depression may resolve spontaneously; however, for most individuals with CHD, depression remains long term.89

Conclusion

A summary of the key evidence-based points is provided in Box 3 and Box 4, and the Appendix gives the National Health and Medical Research Council grades of recommendations and evidence hierarchy.

High-quality care for treatment of depression is achievable and affordable. The benefits of treating depression in people with CHD include improved quality of life, improved adherence to other therapies and potentially improved CHD outcomes.90 Effective treatment of depression may decrease CHD events, but this is not proven: no adequately powered trials have been completed, nor are any ongoing.

1 Process used to develop this National Heart Foundation of Australia consensus statement

The Expert Working Group members performed relevant literature searches using key search phrases including, but not limited to, “stress”, “depression”, “anxiety”, “treatment of depression”, “acute coronary syndromes”, “adherence and depression” and “screening for depression”. This was complemented by reference lists compiled from reviews and the personal collections of the Expert Working Group members.

Searches were limited to evidence available for human subjects with coronary heart disease published in English up to December 2012. The recommendations made in this consensus statement have been graded according to the National Health and Medical Research Council guidelines (see Appendix).2 The Cardiac Society of Australia and New Zealand, beyondblue: the national depression initiative and the Royal Australian and New Zealand College of Psychiatrists were consulted during the development of this document and have endorsed its content.

2 Patient Health Questionnaire (PHQ-2) Yes/No version35

  • During the past month, have you often been bothered by feeling down, depressed or hopeless?

  • During the past month, have you often been bothered by little interest or pleasure in doing things?
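
As a minimal sketch only, the logic of this Yes/No screen can be expressed in a few lines. It assumes the common convention that a “yes” to either question constitutes a positive screen warranting fuller assessment; that convention and the function name are illustrative, not drawn from the statement itself.

```python
# Sketch only: assumes a "yes" to either PHQ-2 question is a positive screen
# prompting fuller assessment; the convention and name are illustrative.
def phq2_screen_positive(feeling_down: bool, little_interest: bool) -> bool:
    """Yes/No PHQ-2: positive if either item is endorsed."""
    return feeling_down or little_interest

print(phq2_screen_positive(feeling_down=False, little_interest=True))  # True
```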

3 National Heart Foundation of Australia grades of recommendation and levels of evidence for screening, referral and treatment for depression in patients with coronary heart disease (CHD)2

Recommendation (Grade2, Level2)

1. For patients with CHD, it is reasonable to screen for depression (A, I)
2. Treatment of depression in patients with CHD is effective in decreasing depression (A, I)
3. Treatment of depression in patients with CHD improves CHD outcomes (D, II)
4. Treatment of depression in patients with CHD changes behavioural risk factors/adherence (B, III-2)
5. Exercise is an effective treatment of depression in patients with CHD (A, I)
6. Exercise improves CHD outcomes in patients with CHD (B, II)
7. Psychological interventions improve depression in patients with CHD (B, II)
8. Psychological interventions improve CHD outcomes in patients with CHD and depression (D, II)
9. SSRIs improve depression in patients with CHD (A, I)
10. SSRIs improve CHD outcomes in patients with CHD and depression (D, III-1)
11. Collaborative-care approach improves depression in patients with CHD (B, II)
12. Collaborative-care approach improves CHD outcomes in patients with CHD and depression (D, II)

SSRIs = selective serotonin reuptake inhibitors.

4 Treatment of depression in patients with coronary heart disease (CHD) — summary of treatment subgroup effects showing grade of recommendation and level of evidence

Treatment: depression (Grade2, Level2); CHD outcome (Grade2, Level2)

Non-drug
  • Exercise: depression A, I; CHD outcome B, II
  • Psychological, including CBT: depression B, II; CHD outcome D, II
  • St John’s wort*: depression D, —*; CHD outcome D, —*
  • n-3 fatty acids: depression D, II; CHD outcome D, II
  • SAMe*: depression D, —*; CHD outcome D, —*
  • Collaborative: depression B, II; CHD outcome D, II

Drug
  • SSRIs: depression A, I; CHD outcome D, III-1

CBT = cognitive behaviour therapy. SAMe = S-adenosylmethionine. SSRIs = selective serotonin reuptake inhibitors. * Insufficient evidence to rate or no trials have been performed. Data not available in patients with CHD.

Appendix: Definition of National Health and Medical Research Council (NHMRC) grades of recommendations and evidence hierarchy*

Definition of NHMRC grades of recommendations

Grade A: Body of evidence can be trusted to guide practice
Grade B: Body of evidence can be trusted to guide practice in most situations
Grade C: Body of evidence provides some support for recommendation(s) but care should be taken in its application
Grade D: Body of evidence is weak and recommendation must be applied with caution

NHMRC evidence hierarchy: designation of levels of evidence

Level I: A systematic review of level II studies
Level II: A randomised controlled trial
Level III-1: A pseudorandomised controlled trial (ie, alternate allocation or some other method)
Level III-2: A comparative study with concurrent controls:
  • Non-randomised, experimental trial
  • Cohort study
  • Case-control study
  • Interrupted time series with a control group
Level III-3: A comparative study without concurrent controls:
  • Historical control study
  • Two or more single-arm studies
  • Interrupted time series without a parallel control group
Level IV: Case series with either post-test or pre-test/post-test outcomes


* From NHMRC additional levels of evidence and grades for recommendations for developers of guidelines.2

Progressive multifocal leukoencephalopathy caused by BK virus?

The evidence is provocative but not definitive; nonetheless, it should serve as a stimulus to further research

In this issue of the Journal, Daveson and colleagues1 describe a case of progressive multifocal leukoencephalopathy (PML) possibly caused by BK virus rather than JC virus. This finding is potentially very significant. The polyomaviruses BK and JC commonly infect humans and remain latent in immunocompetent individuals. Both are associated with clinical disease in the setting of immunosuppression. However, only JC virus has been causally associated with PML; BK virus has been causally associated with nephropathy, ureteric stenosis and cystitis. There have been previous case reports of BK virus causing meningoencephalitis and PML, as detailed by Daveson et al, but the evidence for causality has been tenuous, largely because of the lack of confirmation in tissue. The data provided by Daveson et al are more convincing, although not definitive.

How could their evidence change clinical practice? First, the significance of PML being caused by BK virus is that the diagnosis of PML has until now focused only on the detection of JC virus.2,3 A large number of patients have, or are presumed to have, PML as a consequence of immunosuppression from cytotoxic chemotherapy (especially rituximab), from immunodeficiency related to HIV disease, or from immunomodulation related to the multiple sclerosis drug natalizumab.3 In a substantial proportion of cases, JC virus is not detected in the cerebrospinal fluid or brain biopsy. This has been considered a consequence of insensitive assay tools, sampling error, or episodes of immune restoration inflammatory syndrome that may reduce viral DNA load before collection of cerebrospinal fluid.2,3 It is now conceivable that some of these PML cases may be caused by BK virus. Such cases would potentially be amenable to therapy: some evidence exists for efficacy with cidofovir, fluoroquinolones such as ciprofloxacin, and leflunomide.4,5 Further, risk stratification for PML in natalizumab-treated patients is currently heavily weighted towards the presence or absence of JC virus on serological testing.6 If BK virus truly can cause PML, such risk stratification strategies would need to include assessment for BK virus antibodies.

Given the potential importance of BK virus causing PML, how robust is the evidence for the causal link in Daveson et al’s report? While the authors did not find evidence of JC virus, it would have been helpful to have negative serology results for JC virus. The presence of enhancing lesions on magnetic resonance imaging is somewhat unusual for PML unless the patient has immune restoration inflammatory syndrome; nonetheless, it can occur, and has been recorded in about 10% of non-natalizumab-treated patients.2 The detection of BK virus in brain tissue, especially in the context of inflammation, raises the possibility that it was imported into the brain in inflammatory cells and is an “innocent bystander”. This is certainly possible but, on the other hand, no other cause was found and, in particular, JC virus was not detected by polymerase chain reaction. Further, it would be reassuring to know that the BK virus antibodies did not cross-react with JC virus. Last, the case for BK virus causing PML would have been strengthened by data showing BK viraemia or viruria. Nonetheless, the BK viral DNA load in the cerebrospinal fluid was high, at 11 975 copies/mL; a plasma level of > 10 000 copies/mL is associated with 93% specificity for the presence of BK virus nephropathy.7 This level was quoted in recent guidelines by the Kidney Disease Improving Global Outcomes Transplant Work Group for the diagnosis of BK virus nephropathy.8 Given the implications of this observation, confirmation by an independent laboratory with validated and certified assays for these viruses would be reassuring.

The case for a causal link between BK virus and PML is still not solid, yet the details of the case report are highly suggestive. We consider the implications to be substantial. The report should prompt further research and meticulous analysis of future PML cases with particular attention to the issues we have outlined here.

A meta-analysis of “hospital in the home”

To the Editor: Caplan et al1 include in their meta-analysis a trial by Mather et al that compared home care with intensive care management of patients with acute myocardial infarction (AMI) between 1966 and 1968.2 A joint working party of the Royal College of Physicians and British Cardiac Society dismissed the results of this study because of design defects.3,4

Kalra et al5 performed a randomised trial with three arms for patients with acute stroke: stroke unit care, general ward care with stroke team support, and domiciliary care. Stroke units achieved a significantly lower mortality than general ward or domiciliary care. Caplan et al ignore the heterogeneity of the hospital arms, and sum their mortalities, creating a non-existent advantage for domiciliary care over hospital care. This meta-analytic technique is simplistic and invalid.

Hill et al describe home versus hospital management for patients with suspected AMI,2 as do Mather et al.2 Studies of obsolete treatments, such as home management of patients with AMI, should have been excluded from the meta-analysis.

Rudd et al studied the effect of early discharge after stroke using a 1976 clinical definition of stroke.6 No details of imaging or comorbidities were given. The assumption of equipoise in the trial arms regarding morbidity is not met, and the study is not suitable for inclusion in the meta-analysis.

Indredavik et al7 studied the effect of early supported discharge versus ordinary care in patients with stroke, with 13 deaths at 26 weeks in the experimental group against 15 deaths in the control group. However, Caplan et al incorrectly report this as 21 and 26 deaths, respectively.

If these five most heavily weighted studies are excluded, no significant difference in mortality is seen (243 hospital-in-the-home deaths [n = 2747] v 245 hospital deaths [n = 2435], two-sided P = 0.14). Moreover, meta-analysis of the effect of location on mortality, where the circumstances of the location are not defined and not expected to be homogeneous, is invalid and makes the mathematical exercise futile.
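
The quoted P value can be approximately reproduced with a pooled two-proportion z-test on the stated counts; the short Python sketch below checks the arithmetic only, and assumes the normal approximation rather than whatever exact method the correspondent used.

```python
import math

# Check of the letter's figure: 243/2747 hospital-in-the-home deaths
# v 245/2435 hospital deaths, using a pooled two-proportion z-test.
def two_sided_p(x1: int, n1: int, x2: int, n2: int) -> float:
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = abs(p1 - p2) / se
    return math.erfc(z / math.sqrt(2))  # two-sided normal tail probability

# ~0.135 without continuity correction, consistent with the quoted
# non-significant P = 0.14 (the exact test used is not stated).
print(f"{two_sided_p(243, 2747, 245, 2435):.3f}")
```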

A meta-analysis of “hospital in the home”

In reply: Dickson argues for exclusion of randomised controlled trials (RCTs) if treatments have changed, but treatments are constantly changing, so, following this rule, meta-analysis would be impossible. Similarly, diagnosis has changed: stroke was a clinical diagnosis, then computed tomography was required, and now magnetic resonance imaging is needed. Equipoise is not a requirement for inclusion in a meta-analysis.

Complaints that research is simplistic because it aggregates patients and groups demonstrate a misconception of research, which is designed to aggregate one factor while other factors differ; for example, study arms may have different mixtures of ages but similar average ages. The meta-analysis studied the effects of two systems of care (hospital and hospital in the home [HITH]), not a particular diagnosis or treatment.1,2 Therefore it is legitimate to aggregate hospital patients and compare them with HITH patients.

Location and heterogeneity were mathematically defined; there was no heterogeneity for the mortality data, and other outcomes were adjusted appropriately.

Results from the study by Indredavik et al were published in several reports, but (due to space limitations) only the primary report was cited. The data that Dickson refers to are in a report by Fjaertoft et al.3

Although the prevailing opinions of the Royal College of Physicians of London and the British Cardiac Society criticised the study by Mather et al in the 1970s, no contradictory facts or trials were cited at the time.4 Considering that other prevailing practices that were initially not examined by adequate RCTs led to many iatrogenic deaths (eg, prophylactic use of antiarrhythmic drugs5), such practices should be examined and evidence of patient harm taken seriously, rather than evidence simply being dismissed as obsolete.