
Liability in the context of misdiagnosis of melanoma in Australia

Malignant melanoma is a disease for which misdiagnosis may have very serious ramifications for both patients and clinicians. Given how uncertain and difficult the diagnosis of some melanomas can be, clinicians may well be apprehensive about their potential professional liability arising from claimed misdiagnosis or mismanagement of melanoma. A recent Supreme Court of New South Wales decision1 is one of the few Australian cases to directly address this issue specifically in relation to melanoma. Coote v Dr Kelly sits in the context of recent High Court of Australia decisions on the common law of professional negligence in Australia. It is therefore important to examine the particular facts of the case, how and why it was decided, whether the court’s decision can reasonably be reconciled with what is understood of melanoma diagnosis clinically and from evidence-based medicine, and a subsequent appeal that resulted in an order for retrial. We emphasise the importance of early recognition of uncertainty in diagnosis and subsequent escalation, particularly where delayed diagnosis may affect survival. This article provides medical practitioners with a better understanding of the uncertainty inherent in the law regarding certain issues of causation in negligence cases, and gives some guidance on an appropriate standard of care in the diagnosis of melanoma.

Coote v Dr Kelly: the facts

In September 2009, a patient consulted his general practitioner (GP) about a lesion on the plantar surface of his foot, which was diagnosed and subsequently treated as a plantar wart. Despite repeated attempts at cryotherapy, paring and topical treatments, the lesion continued to enlarge and change in shape and colour over the following 18 months. During this time, the patient was seen by the same doctor and, subsequently, by two other clinicians, all of whom continued to treat the lesion as a plantar wart. By March 2011, the lesion was noted to have substantially increased in size and to have ulcerated. It was then excised and diagnosed histologically as an invasive acral lentiginous melanoma (ALM). The lesion had metastasised, and the plaintiff faced a poor prognosis. Proceedings in negligence against the initial treating GP were brought before the NSW Supreme Court. Despite some uncertainty about the initial appearance of the lesion, the court found that the GP had breached his duty of care by failing to perform a biopsy on the lesion at an earlier stage; had he done so, it may have led to an earlier diagnosis of ALM. However, not all the elements of negligence were established. There was insufficient evidence to show that the breach of duty had caused the ultimate harm that befell the patient and, specifically, that an earlier diagnosis of ALM would have prevented the metastasis and subsequent poor prognosis.

Standard of care and breach of duty

Judicial findings

The court determined that the appropriate clinical standard of care was not met. A breach of duty of care was established, held to consist in the GP’s failure to observe a small black mark in the lesion at the initial consultation. The court determined that this ought to have drawn the attention of a reasonably competent practitioner to the need for further investigation. However, the New South Wales Court of Appeal has ordered a retrial (pending) on the basis that the reasoning leading to the original finding of breach of duty was flawed, as was the reasoning that such a breach, if it occurred, could not be proven on the balance of probabilities, using evidence of population-aggregated survival statistics, to have caused the patient’s loss.2

Commentary

Certain melanomas are inherently difficult to diagnose clinically, particularly those not fulfilling the classical ABCD criteria (asymmetry, border irregularity, colour variegation, and diameter > 6 mm).3 In one study, nodular, desmoplastic and ALM subtypes were not only associated with rapid and aggressive tumour growth but were also more likely to be clinically atypical.4 That is, they were more often amelanotic, symmetrical and elevated, with a regular border. Diagnostic features of these atypical melanomas are less effectively taught, and timely and accurate diagnosis presents a major challenge, particularly in the general practice setting.

Given the difficulties inherent in diagnosis, misdiagnosis of an atypical melanoma should not necessarily be considered a breach of a reasonable standard of care, especially given that GPs may see very few of these lesions during their careers. Whether a misdiagnosis constitutes a breach of duty of care is determined by a court on the basis of the admissible evidence, including expert peer professional opinion. In this case, the plaintiff’s evidence that there was pigmentation of the lesion at the initial presentation was critical in determining whether a breach of duty had occurred. In the absence of comprehensive clinical notes, it was difficult for the defendant to establish that there was no pigmentation at the initial presentation. All the clinicians involved in this case agreed that if there was pigmentation of the lesion, further investigation would have been warranted, as this might have indicated a diagnosis other than that of a plantar wart. The importance placed by the court on the presence or absence of pigmentation in determining the appropriate response of a reasonably competent practitioner is notable, and is emphasised in the appeal judgment.5 It was largely based on the expert opinions provided. While the presence or absence of pigmentation may be an appropriate diagnostic clue, it is only one part of the broader clinical picture, and the courts may tend to lend it excessive weight. Pigmentation alone should not dispose of the question of breach of duty of care. A more reliable clue to misdiagnosis may well be a failure to respond to the treatments already tried.

Multiple clinicians continued repeated, failed, non-definitive treatments, allowing a significant amount of time to pass without the patient being referred to a clinician with peer-recognised specialist qualifications in the diagnosis and management of skin disease. On the evidence available, we consider that it was not the misdiagnosis per se that amounted to a breach of duty of care, but the lack of recognition of uncertainty and a failure to appropriately refer the patient or conduct further investigations once treatment failure became apparent. Definitive biopsy or escalation of care by referral to a specialist may each have averted the breach in this case.

From a clinical perspective, this reinforces the importance of accurate and precise documentation and personal communication with other clinicians. A change in presentation or a pattern of unsuccessful treatment, and hence uncertainty in diagnosis, can thereby be identified and acted on.

Causation and prognosis

Judicial findings

The court of first instance held that the evidence was insufficient to prove that an improved prognosis was probable, rather than possible, had the patient’s ALM been diagnosed and treated at first presentation. In the absence of proven causation of damage, despite a proven breach of duty of care, the court rejected the claim in negligence.

Commentary

The relationship between delay in diagnosis and poorer prognosis in progressive neoplastic disease may seem intuitive for many clinicians. The court’s decision on this point may therefore seem surprising. Indeed, the Court of Appeal rejected it.

The Breslow thickness of a melanoma at the time of removal is a major predictor of the likelihood of metastasis and therefore of overall prognosis.5 In Australia, the 10-year survival rate is 98% for lesions less than 0.76 mm thick but only 53% for lesions more than 3 mm thick; the outcome for people with distant metastasis is extremely poor (5-year survival rate, < 5%).6

Despite this, any direct relationship between delay in diagnosis and increased melanoma thickness remains controversial.6–9 It is recognised that melanomas vary widely in their rate of progression, particularly according to subtype, with certain subtypes such as nodular melanoma known to have a rapid vertical growth phase. One explanation for the apparent lack of a demonstrated relationship between diagnostic delay and tumour thickness is that tumour thickness at diagnosis may be more strongly related to the growth rate and biological aggressiveness of the tumour than to the measured delay in diagnosis.9 Considering melanomas together as a homogeneous group, rather than as subtypes with widespread variability in rates of growth, has also been suggested to be a possible confounding factor for any measured association between thickness and delay in diagnosis.5 It is therefore difficult to draw retrospective conclusions about the prognostic impact of misdiagnosis in an individual case.

The court referred to the recent landmark High Court case of Tabet v Gett,10 where a claim of negligence resulting only in a loss of a chance of a better medical outcome was rejected. Tabet v Gett has authoritatively settled the point that the defendant’s negligence must be proven, on the balance of probabilities, to be the cause of the poorer outcome for the patient, compared with the expected outcome had the breach of duty not occurred. A loss of a chance of a better outcome is not of itself sufficient for a claim in negligence to succeed.

In general, in Australia, for there to be factual causation, it is necessary to prove that the harm to the plaintiff would not have occurred without the defendant’s breach. In Coote v Dr Kelly, the breach by the defendant was effectively that of misdiagnosing melanoma and delaying targeted treatment. The key question therefore became whether it could be proven that a difference existed between the prognosis for the patient at the time of the first presentation and the prognosis at the time that the melanoma was eventually diagnosed, and whether this difference was caused by the actions of his GP.

Much of the evidence relied on the interpretation by expert witnesses of epidemiological studies used for determining possible prognosis. The purpose of these studies is not to establish likely prognosis or to determine retrospective prognosis in any particular case. As a consequence, the first-instance court determined that it was not proven, on the balance of probabilities, that metastasis had not already occurred; at the time of his initial presentation in 2009, this particular patient may already have had a poor prognosis.

The Court of Appeal expressly rejected the proposition that epidemiological studies cannot provide evidence sufficient to prove causation on the balance of probabilities of loss in an individual case. We respectfully agree with the Court of Appeal and strongly caution the medical profession against relying on an assertion (supported at first instance) that epidemiological evidence is incapable of supporting a legal finding of causation in any individual case. It is so capable. “There is nothing in Tabet v Gett . . . standing in the way of such a conclusion”.11

Melanomas with rapid vertical growth phases and aggressive histological features may have vertical progression rates > 0.5 mm in thickness per month4 and, in these cases, it is even more likely that delay in diagnosis may lead to a significant decline in prognosis. The importance of accurate and timely diagnosis of such lesions, in order to capitalise on a potentially short window of opportunity for improved prognosis, is a clinical imperative granted additional legal force by the recent Court of Appeal findings.
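To make the arithmetic concrete, here is a purely illustrative sketch (our own, not part of the court’s reasoning, and deliberately ignoring the subtype variability discussed above) combining the cited growth rate with the Breslow thresholds quoted earlier:

```python
# Illustrative only: assumes constant linear vertical growth, which the
# discussion above notes is NOT generally valid (growth rates vary widely
# by melanoma subtype and over time).

GROWTH_MM_PER_MONTH = 0.5  # rapid vertical growth phase, as cited above

def months_to_thickness(target_mm: float, initial_mm: float = 0.0) -> float:
    """Months for a lesion to thicken from initial_mm to target_mm."""
    return (target_mm - initial_mm) / GROWTH_MM_PER_MONTH

# Breslow thresholds quoted earlier: < 0.76 mm (98% 10-year survival)
# and > 3 mm (53% 10-year survival).
for threshold in (0.76, 3.0):
    print(f"{threshold} mm reached in ~{months_to_thickness(threshold):.1f} months")
# -> 0.76 mm in ~1.5 months; 3.0 mm in ~6.0 months
```

Under these assumptions, a rapidly growing lesion could traverse the entire prognostic range within months, a small fraction of the 18-month delay in Coote v Dr Kelly; this is the clinical intuition behind the short window of opportunity.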

Conclusion

A high index of suspicion is always necessary when considering diagnoses that require early intervention to prevent significant harm to the patient. It may be difficult to diagnose atypical presentations of melanoma. By their very nature, such melanomas will have a higher rate of misdiagnosis.

Failure of initial treatment should trigger a recognition of uncertainty, an awareness of possible serious differential diagnoses and an understanding of the potential significance of error, with consequent escalation to specialist diagnostic and clinical care. Such recognition may be evident at the first consultation, or it may take several consultations before a pattern of uncertainty emerges. Consistency in documentation and communication among colleagues is essential where continuity of care is suboptimal. Recognising uncertainty, and escalating care once it is recognised, is a principle that applies to all aspects of clinical and histopathological practice. Its importance in the context of melanoma diagnosis, where delay may be a critical factor in the patient’s ultimate survival, is paramount. Establishing causation as a result of delayed diagnosis is often complicated, but it is possible, and epidemiological evidence may be employed to do so. If the litigation is not settled before the retrial, the authors propose to publish further analysis of this important case as any judgment becomes available.

Liaise with pathologists to refine understanding of the prostate-specific antigen test

To the Editor: Prostate-specific antigen (PSA) testing and prostate cancer, the focus of a number of recent articles in the Journal and a widely debated topic, are also the subject of an imminent position paper by the National Health and Medical Research Council (NHMRC). Given that there is general agreement on the potential for overdiagnosis and overtreatment of prostate cancer, it is important to continue our efforts to identify high-risk patients and patients for whom treatment is beneficial. The tools we have for this are the digital rectal examination, the PSA test and its refinements, and biopsy with Gleason scoring and its refinements.

The PSA test is a widely misunderstood measure for determining prostate cancer risk. Many authors seek to dismiss its value without careful analysis of recent advances, or acknowledgement that past research often has limited current relevance. For example, the article by Del Mar and colleagues acknowledges PSA free-to-total ratios and the rate of increase in PSA,1 but makes no mention of age-related cut-offs, which are an established method of refining the utility of PSA testing.2 The article by Hugosson and Carlsson3 is well balanced and highlights the problems in trying to compare the evidence from four inadequate PSA screening studies and one well conducted study (ERSPC [European Randomized Study of Screening for Prostate Cancer]). Martin and colleagues attempt to study the cost-effectiveness of PSA screening using a PSA cut-off of 4 ng/mL,4 despite the fact that all Australian laboratories should be using age-related cut-offs. Further, Martin et al ascribe a test cost for PSA testing of $37.55, which includes the cost of measuring multiple PSA fractions, even though these were not used in the screening trials.

Pathologists and medical laboratory staff operate behind the scenes but seek to give referring doctors and their patients the best possible information in making clinical decisions about their care. The Royal College of Pathologists of Australasia has produced a position statement to assist in placing the PSA test, a useful clinical tool, in its correct context.5 Pathologists are also involved in the current national initiatives of the NHMRC and Prostate Cancer Foundation of Australia, which we expect will provide balanced and helpful advice for clinicians and their patients.

Vitamin B12 and folate tests: the ongoing need to determine appropriate use and public funding

It’s not as simple as new for old: we need to follow a process for “disinvestment” in existing medical procedures, services and technologies

Criteria have been developed for assessing the safety, effectiveness and cost-effectiveness of new and emerging health interventions, but additional challenges exist in identifying opportunities for reducing the use of existing health technologies or procedures that are potentially overused, (cost-)ineffective or unsafe.1 Criteria have been proposed to flag technologies that might warrant further investigation under quality improvement programs.1 These criteria are:

  • new evidence becomes available;

  • there is geographical variation in use;

  • variation in care between providers is present;

  • the technology has evolved and differs markedly from the original;

  • there exists a temporal trend in the volume of use;

  • public interest or controversy is present;

  • consultation with health care workers and funders raises concerns;

  • new technology has displaced old technology;

  • there is evidence of leakage (use beyond the restriction or indication);

  • the technology or intervention is a “legacy item” that has never been assessed for cost-effectiveness;

  • use is not in accordance with clinical guidelines; or

  • the technology is nominated by clinical groups.

After such a nomination was made by members of the clinical laboratory community regarding vitamin B12 and folate tests, we sought to determine whether these tests met other criteria. We hope that this article will encourage debate and discussion about the appropriate use of these tests.

Testing for vitamin B12 and folate deficiency

Diagnosing vitamin B12 and folate deficiencies is difficult. The symptoms are diverse (such as malaise, fatigue and neurological symptoms), as are the signs (including megaloblastic anaemia and cognitive impairments). Defining target conditions is, therefore, also difficult. Tests include a full blood count and blood film examination, serum B12, serum folate and red-cell folate (RCF) assays, as well as examination of metabolic markers such as methylmalonic acid (MMA) and homocysteine (Hcy). Untreated vitamin B12 deficiencies may cause serious health problems, including permanent neurological damage (which may occur with low serum B12 levels without haematological changes). Maternal folate deficiencies have been associated with neural tube defects in infants. Potential vitamin B12 or folate deficiencies therefore need to be appropriately investigated and managed.

New evidence

The utility of a diagnostic test is influenced in part by its precision (the ability of a test to faithfully reproduce its own result) and its diagnostic accuracy (ability to discriminate between a patient with a target condition and a healthy patient). Evidence suggests serum B12 tests have poor discriminative ability in many situations, and debate is ongoing over which folate assay is most useful.
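To make the distinction concrete, here is a minimal sketch (with hypothetical counts, not taken from any study cited here) of how diagnostic accuracy is usually quantified against a reference standard:

```python
# Hypothetical 2x2 table of test result versus reference ("gold") standard.
# The counts below are invented for illustration only.
true_pos, false_neg = 40, 10    # patients deficient by the reference standard
false_pos, true_neg = 30, 120   # patients replete by the reference standard

sensitivity = true_pos / (true_pos + false_neg)   # deficient correctly flagged
specificity = true_neg / (true_neg + false_pos)   # replete correctly cleared
total = true_pos + false_neg + false_pos + true_neg
misclassified = (false_pos + false_neg) / total   # overall misclassification

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"misclassified {misclassified:.0%}")
# -> sensitivity 80%, specificity 80%, misclassified 20%
```

As the following paragraphs note, the deeper problem for serum B12 tests is that the reference column of such a table is itself contested: without an accepted gold standard and with variable cut-off values, even these simple figures cannot be estimated reliably.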

The only systematic review and meta-analysis of the diagnostic accuracy of serum B12 tests (conducted by members of our group) suggested that these tests often misclassify individuals as either B12 deficient or B12 replete.2 These findings are consistent with other reports in the literature. One recent report states:

Both false negative and false positive values are common (occurring in up to 50% of tests) with the use of the laboratory-reported lower limit of the normal range as a cutoff point for deficiency.3

And further:

There is often poor agreement when samples are assayed by different laboratories or with the use of different methods.3

Widespread CBLA (competitive-binding luminescence assay) malfunction has also been noted, with assay failure rates of 22% to 35%4 (interference due to intrinsic factor antibodies may explain some of this variation). While a critical overview has suggested that “falsely normal cobalamin concentrations are infrequent in patients with clinically expressed deficiency”, the author notes challenges in diagnosing subclinical deficiency5 (mild metabolic abnormalities without clinical signs or symptoms). Assessment of this evidence base is complicated by the lack of a universally accepted gold standard and by target conditions that are difficult to define, variable clinical presentations and variable cut-off values used to define deficiency.

For investigating folate status, RCF assays are thought to be less susceptible to short-term dietary intake than are assays for serum folate. However, it has been reported that:

The red cell folate assay is more complex to perform than the serum folate assay and requires more steps in sample handling before analysis, and this may be one of the reasons why the precision of the red cell folate assay is less than that of the serum folate assay.6

As discussion continues over which folate test is preferable, new evidence related to the prevalence of folate deficiencies in countries with mandatory food fortification has shifted the focus toward whether there is a need to perform any folate investigations in these jurisdictions. In Australia, mandatory fortification of wheat flour with folic acid was introduced in September 2009.7 Prevalence estimates from a sample of inpatients and outpatients suggested that folate deficiency stood at 0.5% in April 2010, showing an 85% reduction in absolute numbers since April 2009.7 While there is currently no evidence to suggest that the prevalence of megaloblastic anaemia caused by folate deficiency has been reduced, the low frequency of low serum folate and RCF test results in countries where there is mandatory fortification of grain products with folic acid supports the perspective that “there is no longer any justification in ordering folate assays to evaluate the folate status of the patients”.8

Technology development

Over time, multiple technologies for analysing vitamin B12 status have become available, including assays for measuring holotranscobalamin (holoTC, the bioavailable form of vitamin B12), as well as metabolic markers such as MMA and Hcy.3,5 However, like all tests, these are imperfect: holoTC is expensive, not routinely available, itself reliant on poorly defined serum B12 reference ranges, and is yet to be confirmed as a superior test to the serum B12 assay.5 Hcy measurement is subject to artefactual increases due to collection practices, and reference ranges are variable. The availability of MMA tests is restricted to some clinical and research laboratories. As a result, the optimal procedure for measuring vitamin B12 is unclear. As noted above, while a number of approaches exist for assessing folate status, there is currently no consensus on the most appropriate laboratory investigation process.

Temporal, geographical and provider variations

Australian Medicare utilisation data have shown substantial growth in the use of item 66602, which relates to the combined use of serum B12 and folate tests. Between the financial years 2000–01 and 2009–10, use increased from 1082 services per 100 000 population to 7243 services per 100 000 population (21.78% average annual growth rate).9 Over the same period, spending on pathology services overall grew at an average annual rate of 6.3%.

Geographical variation was also present, with the number of services reimbursed for item 66602 ranging from 1962 per 100 000 population in the Northern Territory to 8658 per 100 000 population in the Australian Capital Territory in 2009–10.9 While some of this variation may be due to demographic differences and populations known to have access to fewer health services (eg, Indigenous Australians), the substantial temporal and geographical differences in use raise more questions about appropriate use of these tests, and whether or not they are underused or overused.

Guidelines

Guidelines related to the use of vitamin B12 and folate tests vary widely in their recommendations. While some recommend B12 and folate tests as screening tools in commonly encountered illnesses such as dementia, others suggest restricting testing to patients who have already undergone pretest investigations (such as full blood examinations; however, we note that neurological damage may occur in patients with low serum B12 levels and without haematological changes).10,11 Guidelines may differ on key recommendations, such as the preferred first-line investigation for establishing folate status, while some question the utility of folate investigations at all in jurisdictions where food is fortified with folate.12–14

Leakage

With wide variability in guideline recommendations, and with few appearing to consider the diagnostic accuracy of B12 or folate tests, determining the extent to which services have “leaked” beyond their clinical indications is difficult. Possible leakage is evidenced by the use of serum B12 tests among patients presenting with weakness and tiredness, which is not supported by any available guidelines.15 A large study of general practitioners indicated that between April 2000 – March 2002 and April 2006 – March 2008 their use of serum B12 tests among patients presenting with weakness and tiredness increased by 105%.15

Discussion

Tests for investigating patients’ vitamin B12 and folate status have become widely used in clinical practice. Yet existing evidence suggests that the diagnostic accuracy of serum B12 tests is difficult to determine and may be highly variable. While other tests are available for investigating suspected B12 and folate deficiency (such as holoTC, MMA and Hcy), the diagnostic accuracy of these tests is also contested. Challenges in examining the diagnostic accuracy of serum B12 tests include highly variable clinical presentations, lack of a gold standard and inconsistent cut-off values used to define deficiency. While it remains under debate whether the serum or red-cell folate test is most useful for investigating folate status, mandatory folate fortification in Australia may call into question any use of these tests.

Temporal variation in use and geographical differences in how these tests are employed are both evident in Australian data. Moreover, available clinical guidelines are highly inconsistent in their recommendations. Collectively, the issues of test accuracy, wide variability in test use, and inconsistent guideline recommendations suggest that the use of vitamin B12 and folate tests is an area with much scope for quality improvement.

To improve the use of these tests, further assessment is needed that examines the complexity associated with clinical decision making and the various factors influencing why doctors request these tests. The decision to request an investigation such as a B12 or folate test may be driven by a range of factors, including ease of use, cost, absence of significant patient risk, the perceived need to respond to patient requests, lack of appreciation of the diagnostic accuracy of the tests, or ready availability of results.16 Understanding how these factors influence the use of B12 and folate tests may best be achieved through direct consultation with general practitioners, pathologists, specialists and consumers and is a critical step in advancing the assessment of these tests.

What can and should we predict in mental health?

David Foster Wallace, in his novel Infinite jest (1996), described the impulse for a person at the point of suicide as like that of jumping from a burning high-rise building. “Their terror of falling from a great height is still just as great as it would be for you or me … [but] when the flames get close enough, falling to death becomes the slightly less terrible of two terrors.”

This is one way to conceptualise the situation for at least 2273 people in Australia who committed suicide in 2011. To understand how to prevent such deaths, we need to know — following Wallace’s metaphor — the point at which an individual will choose to jump rather than face the flames. Can we predict this? And to what extent can we predict any outcomes in psychiatry?

Much effort has gone into trying to identify patients who are at particular risk of completed suicide within a year after presenting in psychological crisis or after attempting suicide. Ryan and Large (doi: 10.5694/mja13.10437) argue that predicting the short-term risk of suicide for such patients is not possible given the lack of identified risk factors that sufficiently discriminate between those who die by suicide and those who do not. In addition, many people without the factors suggested as being associated with increased risk go on to attempt suicide. In this light, the authors recommend renewed, close clinical engagement with each patient’s situation and appropriately tailored intervention.

While prediction of suicide risk in individuals may be problematic, at a population level, suicide prevention has for some time been integral to mental health policies. Christensen and Petrie (doi: 10.5694/mja12.11793) outline six recommendations that should be part of any suicide prevention strategy. These revolve around targeting interventions, addressing risk factors, providing online resources and addressing gaps in community knowledge and discussion.

There have been recent moves to encourage more public discussion about suicide. But, as Fitzpatrick and Kerridge argue (doi: 10.5694/mja12.11540), community discussion should not simply be equated to media reporting of suicides. While there is some evidence of an association between increased media reporting and the incidence of attempted or completed suicide, they argue that this should not cloud strategies to encourage general community discussion about how suicide affects us all. There is reason to hope that talk beyond that filtered through the media prism will more effectively address the social and cultural factors that relate to suicide.

The possibility of identifying children at high risk of emotional and behavioural problems has been the basis of the expansion of the voluntary Healthy Kids Check for 3-year-olds to include social and emotional wellbeing. Will such assessment predict later problems? Daubney and colleagues (doi: 10.5694/mja12.11455) argue that although early intervention in this age group does have benefits, many children with problems go undetected, and none of the currently available assessment instruments are suited to screening. In addition, no symptom clusters are predictive of later psychopathology, and there are potential problems with overdiagnosis and overmedicalisation.

From the outsider’s perspective, there often seems to be more bad than good news in mental health, with controversy, lack of progress and confusing evidence. However, there are bright spots, even in those conditions previously thought to be intractable. Grenyer (doi: 10.5694/mja13.10470) shows that the prognosis for borderline personality disorder has significantly improved, with effective psychotherapies and support to patients and their family members offering real hope for more stable and productive lives. This should also lower sufferers’ risk of suicide, which, either attempted or completed, is a well known feature of the disorder.

Twelve years after the publication of Infinite jest, David Foster Wallace committed suicide on a background of worsening, difficult-to-treat depression. His strangely prescient metaphor of flames rising up a burning building reaches beyond literary dilettantism. Perhaps, in the context of suicide, we need to ask ourselves whether we know what causes the fire, and how much we understand the person poised in terror at the top.

An audit of dabigatran etexilate prescribing in Victoria

To the Editor: In 2009, the RE-LY (Randomized Evaluation of Long-Term Anticoagulant Therapy) trial compared dabigatran etexilate with warfarin for prevention of stroke and systemic embolism in 18 113 patients with non-valvular atrial fibrillation (AF) and at least one additional risk factor for stroke.1 In April 2011, dabigatran (Pradaxa; Boehringer Ingelheim) became accessible in Australia under a product familiarisation program funded by the manufacturer; however, the Pharmaceutical Benefits Advisory Committee expressed concern that without informed and appropriate prescription, clinical trial outcomes may not be reproducible.2 Case reports from Europe3 and New Zealand4 identified the need for caution when treating older patients, those with low body weight or patients with renal impairment, because of the risk of serious bleeding.

We performed a retrospective audit of the available criteria of indication, renal function and time in therapeutic range (TTR) of 362 patients at a private anticoagulant clinic who were transferred from warfarin to dabigatran between 1 June 2011 and 30 November 2011. Patients recorded as having AF were presumed to have non-valvular heart disease, although this was not confirmed. The dose of dabigatran, the CHADS2 score (congestive heart failure, hypertension, age ≥ 75 years, diabetes, 1 point each; prior stroke or transient ischaemic attack, 2 points), the weight of the patient, and the patient versus clinician preference for therapy were not known to pathology service staff.
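For readers unfamiliar with the score, a minimal sketch of the CHADS2 calculation exactly as defined in the parenthesis above (the function name and inputs are ours, for illustration only):

```python
def chads2(chf: bool, hypertension: bool, age: int,
           diabetes: bool, prior_stroke_or_tia: bool) -> int:
    """CHADS2: 1 point each for congestive heart failure, hypertension,
    age >= 75 years and diabetes; 2 points for prior stroke or TIA."""
    score = sum([chf, hypertension, age >= 75, diabetes])
    if prior_stroke_or_tia:
        score += 2
    return score

# Example: a 78-year-old with hypertension and a prior TIA scores 4 (max 6).
print(chads2(chf=False, hypertension=True, age=78,
             diabetes=False, prior_stroke_or_tia=True))  # -> 4
```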

The patients in our cohort were older (mean, 76 years) than participants in the RE-LY trial (mean, 71 years). Fewer of our patients had significant renal impairment (14% had an estimated glomerular filtration rate [eGFR] < 50 mL/min/1.73 m2, versus 19% of RE-LY participants), but 2% had an eGFR < 30 mL/min/1.73 m2, and 12% had not had a renal function assessment in the past 12 months. Twenty-nine patients (8%) did not meet the indication of having non-valvular AF.

A RE-LY subanalysis suggested that dabigatran had no advantage over warfarin in reducing non-haemorrhagic stroke or death in patients with excellent international normalised ratio (INR) control (defined as TTR > 72.6%).5 In our cohort, TTR was assessed over a minimum of 6 months before patients were switched from warfarin to dabigatran, and the mean TTR (70%) was higher than in RE-LY (64%). In addition, one-third of our patients had a TTR ≥ 79%, indicating high-quality anticoagulant control.

Our study confirms the prescription of dabigatran to a local population that differed from RE-LY participants in age, renal function and TTR, and occasionally for an unapproved indication. Although we cannot comment on clinical consequences, care should be taken in extrapolating positive outcomes to non-equivalent patient groups, and continuing education campaigns are needed.

The quality of international normalised ratio control in southern Tasmania

To the Editor: The Australian Government’s recently completed Review of anticoagulation therapies in atrial fibrillation (AF) identified that the quality of international normalised ratio (INR) control was one of a number of factors creating uncertainty regarding the cost-effectiveness of novel anticoagulants in the Australian setting.1

INR control, usually expressed as the percentage of time in the therapeutic range (TTR), is critical in determining the relative efficacy, safety and cost-effectiveness of novel anticoagulants compared with warfarin.2,3 There is a strong correlation between TTR and clinical outcomes for patients taking warfarin.4,5 A review of retrospective studies in patients with AF found that a 7% improvement in TTR is associated with one less haemorrhagic event per 100 patient-years, and a 12% improvement is associated with one less thromboembolic event per 100 patient-years.5

There is a paucity of data available regarding INR control in Australia outside of trial conditions. We conducted a retrospective observational study with the aim of determining the quality of INR control in a large cohort of patients in southern Tasmania. We examined INR results for southern Tasmanian people from December 2003 to November 2010. Data were obtained from the major private pathology provider and were screened to identify patients who were receiving monitoring for warfarin therapy, and who received continuous INR monitoring from 2007 to 2010. There were 1137 patients with continuous data for 2007–2010, spanning a mean of 3.5 years.

This group had a mean TTR (assuming a therapeutic range of 2–3) of 69.1% and a mean testing interval of 22.9 days. Patients spent a mean of 18.3% of their time with an INR < 2 and 12.6% of their time with an INR > 3. The proportion of patients with a mean TTR < 60% was 22.3%, and 52.5% had a TTR > 70% (Box).
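The letter does not state how TTR was calculated; the Rosendaal linear interpolation method, which assigns each day between consecutive INR tests an interpolated INR value, is the approach most commonly used. A minimal sketch (our own illustration, not the authors’ code):

```python
# Rosendaal linear interpolation for time in therapeutic range (TTR).
# Illustrative sketch only; the study's actual method is not specified.

def ttr(days, inrs, low=2.0, high=3.0):
    """days: test days in ascending order; inrs: INR measured on each day.
    Returns the fraction of elapsed days whose interpolated INR is in range."""
    in_range = total = 0
    for (d0, i0), (d1, i1) in zip(zip(days, inrs), zip(days[1:], inrs[1:])):
        for step in range(d1 - d0):
            inr = i0 + (i1 - i0) * step / (d1 - d0)  # linear interpolation
            in_range += low <= inr <= high
            total += 1
    return in_range / total

# Example: INR tested on days 0, 20 and 45 as 1.8, 2.6 and 3.4.
print(f"TTR = {ttr([0, 20, 45], [1.8, 2.6, 3.4]):.0%}")  # -> TTR = 62%
```

A TTR computed routinely in this way is exactly the audit parameter the letter goes on to propose for identifying patients with poor INR control.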

The observed mean TTR of almost 70% is superior to that reported in the warfarin arms of recent clinical trials comparing novel anticoagulants to warfarin (mean TTR, 55%–64%).1 Optimisation of TTR is critical for patients who are prescribed warfarin, and TTR could be used as an ongoing audit parameter to identify patients with poor INR control and provide the impetus for interventions to reduce the risk of warfarin-related adverse events. The use of TTR in general practice systems and pathology reports would give prescribers a sound basis on which to make decisions about a range of interventions that could improve the quality of warfarin management. The TTR could also act as an indicator to switch therapy to an alternative anticoagulant if reasonable INR control could not be achieved.

In summary, Australian “real-world” INR control was better than expected and, at the observed level of INR control, the benefits of novel anticoagulants compared with warfarin may be lost for most patients.1

Distribution of patients in southern Tasmania who received continuous warfarin monitoring during 2007–2010, according to mean percentage time in therapeutic range (TTR) (n = 1137)

The dotted line indicates a TTR of 60%. This is considered in the literature to be a benchmark for acceptable INR control.1

Forming networks for research: proposal for an Australian clinical trials alliance

A research network could improve outcomes through advocacy, identifying research gaps and providing shared infrastructure

Research benefits from both competition and collaboration. Inefficiencies occur when researchers are engaged in similar research, often not realising that other groups in Australia are working in the same area. For example, it is possible that there are competing clinical trials in uncommon cancers, which will decrease the chance of any individual study recruiting adequate numbers of patients to answer the questions it poses. In May 2012, the Medical Journal of Australia hosted the MJA Clinical Trials Research Summit. This article was written on behalf of contributors to a working group discussion on networking held during that summit.

There are enormous advantages for clinical researchers working together in networks. Centralised coordination and accumulation of data will provide both greater statistical power to answer common research questions and opportunities to resolve uncertainties about hard clinical end points with the greatest impact on participants’ lives. Centralising these functions allows clinical trials to be performed efficiently. Important roles for research networks are summarised in the Box.

It would be easiest to form and sustain networks if there were an umbrella group to help foster such networks, and a business case can be derived to support this.1 Such an umbrella group could advocate for the importance of clinical research in improving health care for all Australians, provide the infrastructure to maintain local clinical research networks, and help foster and maintain new clinical trials sites. An umbrella group could help to leverage additional funding from government, community and commercial sources for worthwhile research projects. Additional funding could also assist in bringing groups in similar research fields together, in providing access to common resources and experienced staff, in enabling collaboration to develop standard operating procedures, and in helping groups to obtain access to databases and web-based (and e-health) functionality. Biostatisticians and methodologists could be shared between groups.

A central overview of the research proposals of research groups in the network may identify projects that would be best funded by project grants as opposed to those that could be part of larger program funding. Even in the short term, a coordinating group could help local researchers add value to their clinical trials by linking them with experts in building quality-of-life studies into Phase III trials, or adding DNA or epigenetic substudies, which can add significant value when nested into the original study design. All these opportunities can only be realised by bringing together people whose work may otherwise be in very different arenas.

An umbrella group may help enable clinical data and pathology records to be linked with established blood, DNA, and tissue banks, and help to form new biobanking resources and expertise. The most common data linkage is between cancer registries and death registries so that incidence and survival data are linked. Tissue banks allow linkage of pathology specimens with clinical registry data to explore prognostic factors. These linkages must be governed by policies protecting the privacy of individual data, which should be de-identified in the final reports. There is also the opportunity to link existing disease-specific trials groups, such as in the Australasian Stroke Trials Network or the Australia and New Zealand Breast Cancer Trials Group.

An umbrella group could bring together networks for scientific meetings and meetings on topics of common interest around the funding, infrastructure and management of clinical research. For example, a significant area of interest is the translation of research not only from the laboratory to the clinic, but from the clinic into economic and public policy. An umbrella group could help networks make the connections necessary to facilitate these aspects too.

The alliance we propose is not designed to prescribe the composition or governance of groups. However, it should add value by facilitating the development of clinical research groups and helping to achieve long-term funding to resource and sustain them. Networks of research groups could, for example, be formed within a specialty college or by special interest groups, or within networks of hospitals or universities. Some of the funding to support the research would come from within such groupings. Multiple models of networks would evolve to best fit the clinical research activity and existing clinical and research relationships. The sustainability of groups would be predicated on sharing the role of principal investigator across the centres in the networks for different studies, and continuing to attract young investigators into the networks and mentor them.

As connections between groups become well established, clinical research groups could mentor newly established groups. This would enable experience, adaptable resource materials and even infrastructure to be shared, which could help to ensure a more rapid route to productivity and world-class-standard research work for new trials groups. The ability to share infrastructure and even experienced research staff could make new initiatives less costly to undertake.

An alliance or umbrella group could also identify where gaps exist in clinical research funding. For example, it could identify research that still needs to be undertaken by understanding where evidence is lacking from practice guidelines, and knowing what trials are ongoing from trials registries. Further, an alliance could identify deficiencies in funding for specific levels of research personnel, such as mid-career researchers.

In the longer term, it is possible to envisage such an umbrella group facilitating the development of accreditation standards for clinical trials groups and their investigators. It could liaise with consumer groups to strengthen consumer input into clinical trials for the mutual benefit of aligning research directions with consumer priorities as much as possible. Currently, this happens only sporadically.

Funding for an umbrella group is a key issue. The National Health and Medical Research Council would be an ideal body to consider this, and has previously had schemes such as enabling grants and program grants to fund large initiatives involving groups of researchers. What is needed is sustainable infrastructure funding to give stability to the clinical trials networks and their research teams, and to allow planning of not only Phase I to Phase IV clinical trials, but also larger longitudinal clinical studies, which are also necessary to inform clinical practice.2 It would be unreasonable for one body to be expected to provide all of the funding required, so the umbrella group must be able to leverage funding from the spectrum of parties involved in clinical trials.

An important role for an umbrella group for clinical trials is advocacy, to encourage support for clinical research itself. This involves not only showing the clinically beneficial outcomes of trials that better inform health care for patients with similar conditions, but also the high return on investment that has already been shown by putting the results from clinical trials into practice. Highlighting the better outcomes that have been reported in patients participating in centres with trials programs is a major factor. Key partners in this advocacy are patients and their families, particularly those who have been involved with research and have personally experienced its benefits.

More than just advocating for funding, the umbrella group would be promoting a culture of research in all health care settings, including in primary care and in our hospitals, where the challenges of funding patient care in the short term can become all-consuming. Research not only leads to improved medical outcomes in those hospitals that participate in trials, but is cost-effective because even the control arm of a randomised trial often attracts funding that can save on routine care costs.3,4

Roles for a research network

  • Networks of researchers will be more effective at promoting a research culture and securing sustained funding.

  • An umbrella group should be responsible for:

    • providing centralised expertise

    • biostatistical support

    • adding value to clinical trials by identifying substudies

    • bringing the networks together to explore common interests.

Clinical effectiveness research: a critical need for health sector research governance capacity

The barriers to conduct of clinical research will require solutions if we are to implement evidence-based health care reform

Reforms in the funding of health services, such as “activity-based” funding initiatives, seek to facilitate changes in how health care is delivered, leading to greater efficiency while maintaining effectiveness. However, these changes in treatment strategies and service provision often evolve without evidence demonstrating effectiveness in terms of patient outcomes. The pressures on health care expenditure (currently around 9% of gross domestic product1) make such an approach untenable and unsustainable. The evidence necessary to support these initiatives can only be derived through carefully conducted clinical research. Most readers would immediately think of clinical trials of pharmaceuticals or clinical devices, and this type of research remains critically important in Australia, although it is continuing to decline.2 Other questions relate to the effectiveness of changes in health practice or policy, usually (but not always) based on sensible ideas that seem self-evident. However, for health care to function with an evidence base, these ideas need to be proven to be clinically effective and cost-effective. Such research can be costly, and many of the questions to be addressed are not ones that would be the subject of an industry-sponsored trial. Researchers, clinicians and health administrators are therefore faced with the problem of how best to measure the outcomes of changes to health care strategies without the necessary resources to ask and answer the question.

The MJA Clinical Trials Research Summit held in Sydney on 18 May 2012 included a working group addressing issues of research governance and ethics. The key discussion outcomes of that group were:

  • confusion exists regarding the differences between ethics and governance;

  • variability continues in state and federal legislation and regulations, despite attempts at harmonisation;

  • processes for improvement at government and institutional levels are underway but are not yet complete or implemented;

  • hospital boards and chief executive officers need to have incentives to make the infrastructure work;

  • substantial challenges exist when working with international investigator-initiated trials;

  • trials involving the private health sector include specific difficulties such as insurance and contracts; and

  • national accreditation of researchers and training should be considered.

Costs are not the only barrier. Efforts to rationalise health care provision on the basis of evidence provided through the conduct of clinical research are also hampered by existing or perceived obstacles in the form of cumbersome institutional research governance and ethics approval processes. Substantial changes and streamlining of the processes of ethical review are underway across Australia, addressing inconsistencies and inefficiencies of human research ethics committee approval, financial processes, and contractual clinical research governance processes. Nevertheless, the system remains complex, slow and expensive. Unfortunately, the old adage of “good, quick or cheap: pick two” still applies.

Many researchers fail to distinguish between research governance and ethics. Clinical research in Australia is governed by the National Health and Medical Research Council (NHMRC) National statement on ethical conduct in human research3 and the Australian code for the responsible conduct of research.4 Research governance can “be understood as comprising distinct elements ranging from the consideration of budgets and insurance, to the management and conduct of scientific and ethics review”.5 Research governance thus includes oversight of all processes of ethics review, but also includes the responsibilities of both investigators and institutions for the quality and safety of their research.3

The Harmonisation of Multi-centre Ethical Review (HoMER) initiative by the NHMRC is a significant step forward, enabling a single ethics review process that has been adapted for several states. This process, if used effectively, should reduce the resources required to obtain ethics approval for multicentre research, but it has also created some challenges in ensuring that research governance obligations are maintained within various health service jurisdictions.6 Currently, no incentives or requirements exist for health services or hospital chief executive officers to ensure that appropriate infrastructure is in place and working. Similarly, a different set of challenges arises when considering performing research in the private sector, where insurance and contractual issues may differ substantially from those in the public sector.

Much of the non-industry-sponsored clinical research performed in Australia is investigator-initiated research, supported by funding organisations such as the NHMRC, state governments, and other non-government organisations such as Cancer Council Australia, the National Heart Foundation of Australia and cooperative clinical trial groups. At present, investigator-initiated trials require comparable levels of research governance and are certainly subject to the same requirements for good clinical practice as industry-sponsored trials. The research questions addressed by these studies are based on clinical imperatives, a broad understanding of the underlying science, and a necessary ability to work on a shoestring — the latter being the main point of distinction from industry-sponsored trials. Current models of competitive research grant funding do not recognise the complexities, duration, costs and distribution of costs across the length of a clinical trial, especially when considering late clinical outcomes that are often the most clinically relevant ones. As an example, an NHMRC project grant can be funded for at most 5 years and therefore necessitates a focus on end points occurring within that time frame. The clinical questions that we and the community recognise as important might not be able to be answered with such designs. The resources required to meet these requirements continue to escalate, and we currently run the risk that these trials will soon be untenable in Australia. Anecdotally, many academic clinical research units are already questioning what level of involvement they should have in such relatively underresourced trials, or whether they should be involved at all, for the most part purely for financial reasons.

Within the current Australian health care environment, clinical research is being conducted in the face of significant headwinds, chiefly inefficiencies arising from the resource costs of complex governance arrangements combined with those of research conduct (Box). Processes to be considered that would improve clinical research capacity might include:

  • continued adoption of electronic health records that span clinical, investigative (ie, pathology and radiology) and therapeutic information (eg, the Australian Orthopaedic Association National Joint Replacement Registry);

  • data-linkage techniques to obtain clinical outcomes (eg, hospital readmission data, Medicare Benefits Schedule and Pharmaceutical Benefits Scheme use data, the National Death Index);

  • better integration of research into routine clinical practice;

  • national accreditation of investigators;

  • standardised good clinical practice training;

  • increased profile for research participation at the clinician–patient level, enabling the conduct of studies that are more representative of a wide spectrum of patients;

  • development of a clinically relevant strategic research agenda led by collaborations between clinicians, researchers and health policy decisionmakers;

  • a culture shift whereby lawyers and hospitals communicate and quantify the risk of research appropriately.

Research developed through partnerships between health policymakers and health service providers should lead to outcomes that are more immediately relevant and translatable to the care we provide, the outcomes we achieve and the costs incurred by the health system. Reinvestment of financial and efficiency gains realised from initial research outcomes back into the next relevant translational research question provides a model for a sustainable health system that evolves with the support of a robust clinical research-driven evidence base. These financial windfalls currently go back into government coffers and ideally should be seen as a potential funding stream to support future clinical research.

As the demands on our health system continue to mount, the need for clinical effectiveness research to build a robust evidence base upon which to reform care has become even more acute. It will be critical to align the clinical and policy research agenda while strengthening the governance structures that facilitate the conduct of research within the clinical space if we are to develop “an agile, responsive and self-improving health system for future generations”.7

Key points

Barriers to clinical research include:

  • regulatory complexity

  • inflexibility of ethical review and oversight

  • funding models that are not designed to support clinical trials

  • lack of incentive for engagement of health services in research support.

Solutions may include:

  • different funding models, including support for longer time frames

  • simplification of ethical and governance processes recognising the different goals of industry- versus investigator-initiated research

  • better involvement by health services in supporting research

  • return of savings from clinical research to support further research

  • clinical research key performance indicators for health service administrators.

BK virus-associated progressive multifocal leukoencephalopathy

Progressive multifocal leukoencephalopathy (PML) is associated with the polyomavirus JC virus, while BK virus is causative of polyomavirus-associated nephropathy. We report the first definitive case of BK virus-associated PML. This case highlights the importance of biopsy for aetiologic diagnosis in the setting of viral latency and the absence of clear T-cell dysfunction or biologic therapy.

Clinical record

A 71-year-old woman with a low-grade B-cell non-Hodgkin lymphoma was referred by her haematologist for investigation of a 2-week history of progressive apraxia and right-sided neglect. Her past medical history also included Sjögren syndrome and hypogammaglobulinaemia. Her non-Hodgkin lymphoma was complicated by bone marrow infiltration, low-grade lymphadenopathy and splenomegaly. She had received oral chlorambucil 3 years before, but had not been on any therapy for the past 18 months as her disease was stable. She was receiving 6-weekly immunoglobulin infusions at the time of admission.

On examination, the patient had right upper-limb drift and incoordination, with normal power, reflexes and tone throughout. She had difficulty following three-step commands, and had paraphasia and constructional apraxia. She did not have palpable lymphadenopathy but had an enlarged spleen 10 cm below the costal margin. A computed tomography scan of the chest, abdomen and pelvis revealed stable lymphadenopathy and splenomegaly. Full blood examination and chemistry showed stable mild trilineage pancytopenia with a haemoglobin level of 110 g/L (reference interval [RI], 115–160 g/L), white cell count of 3.6 × 10⁹/L (RI, 4–11 × 10⁹/L) and platelet count of 118 × 10⁹/L (RI, 150–400 × 10⁹/L). Her lactate dehydrogenase level was not elevated. Her liver function test results showed a cholestatic pattern of liver dysfunction, with an alkaline phosphatase level of 132 U/L (RI, 20–110 U/L) and a gamma-glutamyl transferase level of 72 U/L (RI, 12–43 U/L). She also had stage 3b chronic kidney disease, with a creatinine level of 157 μmol/L (RI, 40–90 μmol/L). Her CD4+ lymphocyte count was 210 × 10⁶/L (RI, 600–1400 × 10⁶/L). The patient was HIV negative. Her cerebrospinal fluid (CSF) showed five mononuclear lymphocytes (RI, < 5 white cells, all mononuclear) and seven erythrocytes (RI, 0), a normal protein level and no abnormality on lymphocytic immunophenotypic analysis. Real-time 5′-nuclease polymerase chain reaction (PCR) assays specific for JC virus (JCV) DNA and BK virus (BKV) DNA were performed on DNA extracted from the CSF. The probes and primers used (Box 1) targeted the coding region of the large T-antigen. JCV DNA was not detected; BKV DNA was detected at a viral load of 11 975 copies/mL (normally undetectable). Cryptococcal antigen testing on serum and CSF was negative, as were PCR tests for Epstein–Barr virus DNA and toxoplasma DNA on CSF. Gadolinium-enhanced magnetic resonance imaging (MRI) of the brain showed two areas of abnormality in the posterior left frontal lobe, in the subcortical and periventricular regions. These areas showed low signal on T1-weighted imaging and high signal on T2-weighted (Box 2) and FLAIR (fluid attenuated inversion recovery) imaging. The deep white matter lesion showed peripheral enhancement. Additionally, non-specific high-T2-signal periventricular abnormalities were found.
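
To make this laboratory picture easier to scan, the short Python sketch below (an illustration only, not part of the reported workup) checks the quoted admission results against their stated reference intervals and flags any out-of-range values:

```python
# Illustrative sketch: flag the admission results that fall outside the
# reference intervals quoted in the clinical record above.
REFERENCE_INTERVALS = {
    # analyte: (low, high, unit), as stated in the text
    "haemoglobin": (115, 160, "g/L"),
    "white cell count": (4, 11, "x 10^9/L"),
    "platelet count": (150, 400, "x 10^9/L"),
    "alkaline phosphatase": (20, 110, "U/L"),
    "gamma-glutamyl transferase": (12, 43, "U/L"),
    "creatinine": (40, 90, "umol/L"),
    "CD4+ lymphocyte count": (600, 1400, "x 10^6/L"),
}

RESULTS = {
    "haemoglobin": 110,
    "white cell count": 3.6,
    "platelet count": 118,
    "alkaline phosphatase": 132,
    "gamma-glutamyl transferase": 72,
    "creatinine": 157,
    "CD4+ lymphocyte count": 210,
}

for analyte, value in RESULTS.items():
    low, high, unit = REFERENCE_INTERVALS[analyte]
    if not low <= value <= high:
        flag = "low" if value < low else "high"
        print(f"{analyte}: {value} {unit} ({flag}; RI {low}-{high})")
```

Run as written, the sketch flags the pancytopenia, the cholestatic liver pattern, the renal impairment and the reduced CD4+ count described above.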

While these investigations were proceeding, the patient’s neurological state deteriorated. Although the MRI findings were thought more likely to be consistent with PML, the negative PCR result for JCV DNA argued against this diagnosis. Consequently, a trial of dexamethasone was administered, as the main radiological differential diagnosis was lymphoma. The patient’s symptoms did not improve. After receipt of the positive PCR result for BKV DNA in the CSF, the possibility of BKV-associated PML was entertained; however, as this entity had not previously been definitively described, a brain biopsy was deemed necessary for a definitive diagnosis. The patient underwent a brain biopsy 1 month after admission.

Deep and superficial brain biopsy samples were taken, yielding eight fragments of tissue 2–4 mm in maximum extent. Sections stained with haematoxylin and eosin showed reactive gliosis with occasional large atypical astrocytes, numerous foamy macrophages and a sparse perivascular mononuclear infiltrate (Box 3). Immunohistochemistry using antibodies for glial fibrillary acidic protein and neurofilaments showed gliosis with relative sparing of axons, in keeping with demyelination. Sections were stained using a BK antibody directed against the BKV large T-antigen (Clone BK-T.1, Chemicon International). Intranuclear inclusions in scattered cells stained positively for BKV (Box 3). JCV staining was not performed. Electron microscopy with a JEOL JEM 1011 transmission electron microscope was performed on reprocessed paraffin block material and confirmed the presence of spherical viral particles measuring 40–45 nm, consistent with polyomavirus (Box 4). Cells containing polyomavirus particles showed oligodendrocytic differentiation. Nuclei were large and the cytoplasm showed numerous polyribosomes and rough endoplasmic reticulum. The same PCR protocol as described above was performed on the brain biopsy tissue and was strongly positive for BKV DNA but negative for JCV DNA. Extracted DNA was amplified by an in-house BKV DNA PCR using primers modified from those originally described.1 The resulting PCR product was sequenced with the same primers, and sequences were confirmed to be BKV DNA using a National Center for Biotechnology Information BLAST search (http://blast.ncbi.nlm.nih.gov). Sequences were 100% concordant with BKV isolate SJH-LG-310 (GenBank accession number JN192440).
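
As a minimal sketch of this final confirmation step, the Python code below (assuming Biopython is installed and NCBI is reachable over the network) submits a query to BLAST and reports the identity of the top alignments. The query shown is merely a stand-in taken from the Box 1 BKV forward primer; the patient's full PCR-product sequence is not reproduced in this report.

```python
# Minimal sketch of confirming a sequenced PCR product against GenBank
# via a remote NCBI BLAST search (requires Biopython and internet access).
from Bio.Blast import NCBIWWW, NCBIXML

# Stand-in query: the BKV forward primer from Box 1, used here only to
# make the example self-contained.
QUERY = "TGCTTCTTCATCACTGGCAAAC"

# Submit a nucleotide-nucleotide search against the nt database.
handle = NCBIWWW.qblast("blastn", "nt", QUERY)
record = NCBIXML.read(handle)

# A 100% identity alignment to a BKV isolate (e.g. GenBank JN192440)
# would support the identification reported in the text.
for alignment in record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:70]}  identity={identity:.1f}%")
```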

A diagnosis of BKV-associated PML was made and, given the patient’s irreversible underlying immune deficits, her age and her progressive neurological decline, a palliative approach was taken. The patient died 6 weeks after undergoing the brain biopsy.

Discussion

The polyomaviruses BK and JC commonly infect humans and remain latent in immunocompetent individuals.2 Both are associated with clinical disease in the setting of immunosuppression — BKV with polyomavirus-associated nephropathy and haemorrhagic cystitis, and JCV with PML.

More recently, BKV has been described as an emerging pathogen in the central nervous system.3 However, its true pathogenicity is difficult to establish because of its known central nervous system latency in asymptomatic immunocompetent and immunocompromised individuals,4 which confounds any newly proposed association with disease.

BKV is newly recognised as a causative agent of meningoencephalitis and has been demonstrated in the tissue and CSF of affected patients.5–8 To our knowledge, two cases of BKV-associated PML have been proposed in the literature. However, definitive diagnoses were not made in either case, as both relied on BKV isolated from CSF as a surrogate marker, with no evidence of BKV in brain tissue.9,10 Given the problem of latency in multiple tissues, it could be asserted only that BKV was plausibly the causative agent. For this reason, a brain biopsy was deemed necessary for a definitive diagnosis in the case we describe.

JCV-associated PML is most commonly associated with HIV/AIDS. It is also known to occur in patients with multiple sclerosis who are treated with natalizumab and in individuals with idiopathic CD4+ lymphopenia and other T-cell deficient states.11,12 It is plausible that a similar T-cell immunodeficiency is necessary to predispose an individual to BKV-induced PML. Although the patient described had both Sjögren syndrome and a haematological malignancy with known hypogammaglobulinaemia, her CD4+ T-cell count at the time of diagnosis, 210 × 10⁶/L, was above the level generally associated with HIV-related PML. However, a functional T-cell impairment remains a possible predisposing factor for the development of disease in this case.

In our patient, the presence of BKV DNA without JCV DNA in the CSF and brain biopsy specimens, together with the associated findings on histopathology and electron microscopy, supports the diagnosis of BKV-induced PML. Although JCV-specific immunohistochemistry was not performed, the stain used is reportedly specific for BKV and does not cross-react with JCV or the related polyomavirus, simian virus 40.13 DNA sequencing definitively established the presence of BKV DNA in our patient’s brain tissue and, consequently, we suggest that this evidence supports our diagnosis of the first biopsy-proven case of BKV-associated PML.

1 JC virus (JCV) and BK virus (BKV) large T-antigen primers and probes

Designation    Sequence, 5′–3′

Primer
  BKV forward  TGCTTCTTCATCACTGGCAAAC
  BKV reverse  GGTGCCAACCTATGGAACAGA
  JCV forward  TGATGGTTAAAGTGATTTGGCTGAT
  JCV reverse  TTGCTTATGGGCATGTACTTAGACTTT

Probe
  BKV          CACCAGGACTCCCAC (6-FAM fluorophore and MGB-NF quencher*)
  JCV          TCACATTTTTTGCATTGCT (6-FAM fluorophore and MGB-NF quencher*)

* FAM = 6-carboxyfluorescein. MGB-NF = minor-groove binding non-fluorescent.
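
For readers who wish to work with these oligonucleotides programmatically, the short Python sketch below (an illustration, not part of the reported assay protocol) holds the Box 1 sequences as data and computes each oligonucleotide's length and GC fraction, a routine sanity check in assay design:

```python
# Illustrative sketch: the Box 1 primer/probe sequences as a data structure,
# with a simple GC-fraction calculation for each oligonucleotide.
ASSAYS = {
    "BKV": {
        "forward": "TGCTTCTTCATCACTGGCAAAC",
        "reverse": "GGTGCCAACCTATGGAACAGA",
        "probe":   "CACCAGGACTCCCAC",
    },
    "JCV": {
        "forward": "TGATGGTTAAAGTGATTTGGCTGAT",
        "reverse": "TTGCTTATGGGCATGTACTTAGACTTT",
        "probe":   "TCACATTTTTTGCATTGCT",
    },
}

def gc_fraction(seq: str) -> float:
    """Return the proportion of G and C bases in a DNA sequence."""
    return sum(base in "GC" for base in seq.upper()) / len(seq)

for virus, oligos in ASSAYS.items():
    for role, seq in oligos.items():
        print(f"{virus} {role}: {len(seq)} nt, GC = {gc_fraction(seq):.0%}")
```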

2 Gadolinium-enhanced magnetic resonance image of the patient’s brain

T2-weighted image showing high signal in the periventricular (solid arrow) and subcortical (dashed arrow) white matter of the posterior left frontal lobe.

3 Stained histological sections of the patient’s deep brain biopsy specimen

Main image: Haematoxylin and eosin staining (original magnification, ×40) showing two cells with well defined intranuclear inclusions. Insert: Immunoperoxidase staining for BK-T antigen showing variable but focally strong nuclear staining (original magnification, ×40).

4 Transmission electron micrograph of viral particles in the patient’s deep brain biopsy specimen

Main image: Part of an infected glial cell (C) and part of the cell nucleus (N) (original magnification, ×30 000). A myelinated axon can also be seen (arrow). Insert: A nucleus with polyoma-type viral particles 40–45 nm in diameter (original magnification, ×120 000).